
Session 9: LEARNING VECTOR QUANTIZATION NETWORKS



  1. Session 9: LEARNING VECTOR QUANTIZATION NETWORKS Course: H0434/Jaringan Syaraf Tiruan (Artificial Neural Networks) Year: 2005 Version: 1

  2. Learning Outcomes By the end of this session, students are expected to be able to: • Demonstrate a Learning Vector Quantization network

  3. Outline • Network Architecture • Learning Rule

  4. Learning Vector Quantization In an LVQ network, the net input of the first layer is not computed by taking an inner product of the prototype vectors with the input. Instead, the net input of each neuron is the negative of the distance between its prototype vector and the input.
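The first layer described above can be sketched in plain Python; this is a minimal illustration (function names are mine, not from the slides), where each row of W holds one prototype vector and `compet` implements the winner-take-all transfer function:

```python
import math

def compet(n):
    """Competitive transfer function: output 1 for the neuron with the
    largest net input, 0 for all others (ties go to the lowest index)."""
    winner = max(range(len(n)), key=lambda i: n[i])
    return [1 if i == winner else 0 for i in range(len(n))]

def lvq_layer1(W, p):
    """First LVQ layer: the net input of each neuron is the NEGATIVE
    Euclidean distance between its prototype (a row of W) and input p,
    so the closest prototype wins the competition."""
    n = [-math.dist(w, p) for w in W]
    return compet(n)
```

For example, with prototypes [1, 0] and [0, 1], an input [0.9, 0.1] is closest to the first prototype, so the first neuron wins.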

  5. Subclass For the LVQ network, the winning neuron in the first layer indicates the subclass to which the input vector belongs. Several different neurons (subclasses) may make up each class. The second layer of the LVQ network combines subclasses into a single class. The columns of W2 represent subclasses, and the rows represent classes. W2 has a single 1 in each column, with the other elements set to zero; the row in which the 1 occurs indicates the class to which that subclass belongs.

  6. Example • Subclasses 1, 3 and 4 belong to class 1. • Subclass 2 belongs to class 2. • Subclasses 5 and 6 belong to class 3. A single-layer competitive network can create only convex classification regions; the second layer of the LVQ network combines these convex regions into more complex categories.
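The subclass-to-class mapping in this example can be written out as a concrete W2 (a sketch; the matrix follows directly from the bullet list above, with one 1 per column):

```python
# W2: rows = classes, columns = subclasses; a single 1 in each column.
# Subclasses 1, 3, 4 -> class 1; subclass 2 -> class 2; subclasses 5, 6 -> class 3.
W2 = [
    [1, 0, 1, 1, 0, 0],  # class 1
    [0, 1, 0, 0, 0, 0],  # class 2
    [0, 0, 0, 0, 1, 1],  # class 3
]

def layer2(W2, a1):
    """a2 = W2 @ a1: maps the one-hot winning-subclass vector a1
    onto the one-hot class vector a2."""
    return [sum(w * a for w, a in zip(row, a1)) for row in W2]
```

If subclass 4 wins the first-layer competition (a1 = [0, 0, 0, 1, 0, 0]), the second layer outputs a2 = [1, 0, 0], i.e. class 1.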

  7. LVQ Learning LVQ learning combines competitive learning with supervision, so it requires a training set of examples of proper network behavior. If the input pattern is classified correctly, the winning weight vector is moved toward the input vector according to the Kohonen rule; if the input pattern is classified incorrectly, the winning weight vector is moved away from the input vector.
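The rule above (LVQ1) can be sketched as a single training step; this is a minimal illustration under the W2 convention described on the previous slide, with function names and the learning-rate parameter `lr` as my own assumptions:

```python
import math

def lvq1_step(W1, W2, p, t, lr=0.5):
    """One LVQ1 step. Find the winning (closest) prototype; move it
    toward p if its subclass maps to the target class t (Kohonen rule),
    away from p otherwise. W1 is modified in place."""
    i = min(range(len(W1)), key=lambda k: math.dist(W1[k], p))  # winner
    # Subclass i is correct iff column i of W2 has its 1 in the target row.
    correct = W2[t.index(1)][i] == 1
    sign = 1.0 if correct else -1.0
    W1[i] = [w + sign * lr * (x - w) for w, x in zip(W1[i], p)]
    return i, correct
```

With two prototypes [0, 0] and [1, 1], W2 = identity, input [0.9, 0.9] and target class 2, prototype 2 wins, is correct, and moves halfway toward the input (lr = 0.5).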

  8. Example Training set: {p2 = [1; 0], t2 = [0; 1]}, {p3 = [1; 1], t3 = [1; 0]}

  9. First Iteration The first-layer output is a1 = compet(n1), where each net input is the negative distance between a prototype (row of W1) and the input:

a1 = compet( [ −||w1 − p|| ; −||w2 − p|| ; −||w3 − p|| ; −||w4 − p|| ] )

With prototypes w1 = [0.25; 0.75], w2 = [0.75; 0.75], w3 = [1.00; 0.25], w4 = [0.50; 0.25] and input p = [0; 1]:

a1 = compet( [−0.354; −0.791; −1.250; −0.901] ) = [1; 0; 0; 0]

Neuron 1 has the largest (least negative) net input, so it wins the competition.

  10. Second Layer The second layer maps the winning subclass to its class: a2 = W2 a1. This is the correct class, therefore the winning weight vector is moved toward the input vector.
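The first iteration can be reproduced numerically; a minimal sketch using the prototype values shown in the slide (the learning rate of 0.5 is an assumption, as the slides do not state it):

```python
import math

# Initial prototypes (rows of W1) and input, taken from the slide.
W1 = [[0.25, 0.75], [0.75, 0.75], [1.00, 0.25], [0.50, 0.25]]
p = [0.0, 1.0]

# Net inputs: negative distances to p.
n = [-math.dist(w, p) for w in W1]
# n is approximately [-0.354, -0.791, -1.250, -0.901]
winner = max(range(len(n)), key=lambda i: n[i])  # index 0, i.e. neuron 1

# Correct class, so move the winning prototype toward p (Kohonen rule).
lr = 0.5  # assumed learning rate
W1[winner] = [w + lr * (x - w) for w, x in zip(W1[winner], p)]
# W1[0] is now [0.125, 0.875], halfway between [0.25, 0.75] and [0, 1]
```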

  11. Figure

  12. Final Decision Regions
