
Version Space using DNA Computing


Presentation Transcript


  1. Version Space using DNA Computing 2001.10.12 임희웅

  2. Version Space • What is a version space? • Concept learning: classifying a given instance x • Maintain the set of hypotheses consistent with the training examples • Instance x • Described by a tuple of attribute values • Attributes • Dept, {ee, cs} • Status, {faculty, staff} • Floor, {four, five}

  3. Version Space • Hypotheses H • Each hypothesis is a conjunction of constraints on the attributes • Ex) <cs, faculty> or <cs> • Target concept • c : X → {0, 1} • Training examples D • <cs, faculty, four> + • <cs, faculty, five> + • <ee, faculty, four> - • <cs, staff, five> -
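The setup on slides 2–3 can be pinned down in software before turning to DNA. Below is a minimal Python sketch (the dict representation and all names are my own, not from the slides) that enumerates the conjunctive hypothesis space and filters it against D; its output is exactly the surviving region of the lattice drawn on the following slides.

```python
from itertools import combinations, product

# Attribute domains from the slides.
ATTRIBUTES = {
    "Dept":   ["ee", "cs"],
    "Status": ["faculty", "staff"],
    "Floor":  ["four", "five"],
}

def all_hypotheses():
    """Every non-empty conjunction of attribute constraints,
    e.g. {'Dept': 'cs'} or {'Dept': 'cs', 'Status': 'faculty'}."""
    names = list(ATTRIBUTES)
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            for values in product(*(ATTRIBUTES[a] for a in subset)):
                yield dict(zip(subset, values))

def matches(h, x):
    # A conjunction matches an instance iff every constraint holds.
    return all(x[a] == v for a, v in h.items())

def consistent(h, x, label):
    # Consistent: the hypothesis matches x exactly when x is positive.
    return matches(h, x) == label

# Training examples D from the slides (True = '+', False = '-').
D = [
    ({"Dept": "cs", "Status": "faculty", "Floor": "four"}, True),
    ({"Dept": "cs", "Status": "faculty", "Floor": "five"}, True),
    ({"Dept": "ee", "Status": "faculty", "Floor": "four"}, False),
    ({"Dept": "cs", "Status": "staff",   "Floor": "five"}, False),
]

version_space = [h for h in all_hypotheses()
                 if all(consistent(h, x, label) for x, label in D)]
print(version_space)   # -> [{'Dept': 'cs', 'Status': 'faculty'}]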

  4. Version Space • [Diagram: the lattice of candidate hypotheses, from single constraints (cs, ee, faculty, staff, four, five) through two-attribute conjunctions (cs ∧ faculty, cs ∧ staff, ee ∧ faculty, ee ∧ staff, faculty ∧ four, faculty ∧ five) to three-attribute conjunctions (cs ∧ faculty ∧ four, cs ∧ faculty ∧ five, ee ∧ faculty ∧ four, cs ∧ staff ∧ five)]

  5.–8. [The same lattice redrawn four times as the training examples are processed one at a time, each marked + or −, with the hypotheses each example rules out eliminated step by step]

  9. Version Space using DNA Computing • Problem Definition • Attributes • Dept, {ee, cs} • Status, {faculty, staff} • Floor, {four, five} • Training examples D • <cs, faculty, four> + • <cs, faculty, five> + • <ee, faculty, four> - • <cs, staff, five> -

  10. Encoding (1) • If the order between attributes is taken into account, each attribute value is represented by a basic DNA sequence, and sticky-end conditions are set so that sequences belonging to different (adjacent) attributes can ligate to one another. • In this case hypotheses such as <cs, faculty> or <faculty, four> are generated, but <cs, four> is not. • What is the distribution over the generated hypotheses?
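A quick sketch of what this order-aware encoding can and cannot assemble, under the assumption stated above that only values of consecutive attributes in the fixed order share compatible sticky ends (names are illustrative):

```python
from itertools import product

# Fixed attribute order; sticky ends only join consecutive attributes.
ORDER = ["Dept", "Status", "Floor"]
VALUES = {"Dept": ["ee", "cs"],
          "Status": ["faculty", "staff"],
          "Floor": ["four", "five"]}

def ligation_products():
    # Only contiguous runs of attributes can assemble into one strand.
    for start in range(len(ORDER)):
        for stop in range(start + 1, len(ORDER) + 1):
            run = ORDER[start:stop]
            for vals in product(*(VALUES[a] for a in run)):
                yield tuple(vals)

products = set(ligation_products())
assert ("cs", "faculty") in products    # Dept-Status are adjacent
assert ("faculty", "four") in products  # Status-Floor are adjacent
assert ("cs", "four") not in products   # skipping Status cannot ligate
```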

  11. Encoding (2) • If the order between attributes is not considered • Use the encoding from Adleman's experiment • Attribute value : vertex • Ligation of attribute values : edge → complete graph, overhead
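The "complete graph, overhead" remark can be made concrete with a back-of-the-envelope count, assuming one Adleman-style vertex strand per attribute value and one linker strand per (undirected) pair of values:

```python
from math import comb

n_values = 2 + 2 + 2        # ee/cs, faculty/staff, four/five
vertex_strands = n_values   # one vertex sequence per attribute value
edge_strands = comb(n_values, 2)   # K_6 needs a linker per pair

print(vertex_strands + edge_strands)   # 6 + 15 = 21 distinct sequences
# The edge count grows quadratically with the number of attribute
# values -- the overhead that motivates the bead encoding on slide 12.
```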

  12. Encoding (3) • [Diagram: a bead linking Dept, Status, and Floor strands, plus a dummy sequence for each attribute] • Use of beads • Needs far fewer sequences than the Adleman-style encoding above • Can generate all possible hypotheses at once, or generate exactly the hypotheses consistent with a particular example

  13. Detection (1) • Constructing a training example • Built from the complementary strands of its attribute values • Positive example • Put the example strand, constructed as above, into the solution; a hypothesis strand that binds to it completely is consistent, otherwise inconsistent • Negative example • Put the example strand into the solution; a hypothesis strand that binds to it completely is inconsistent, otherwise consistent • Point to consider • Making sure inconsistent strands do not bind to the example strand (equilibrium constant)
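The binding rule on this slide can be simulated in an idealized way: treat the example strand as the set of complements of its attribute-value sequences, and call a hypothesis strand "fully bound" when every one of its values finds its complement there. Real hybridization is messier (partial binding, the equilibrium-constant caveat above), so this is only the logical skeleton, with names of my own choosing:

```python
def fully_hybridizes(hypothesis_values, example_values):
    # Idealized: fully bound iff every constrained value on the
    # hypothesis strand finds its complement on the example strand.
    return set(hypothesis_values) <= set(example_values)

def filter_tube(hypotheses, example_values, positive):
    """Keep the strands the slide calls consistent: fully bound for a
    positive example, not fully bound for a negative one."""
    return [h for h in hypotheses
            if fully_hybridizes(h, example_values) == positive]

# Example: the negative <ee, faculty, four> removes 'ee' and
# 'faculty AND four' but keeps 'cs AND faculty'.
tube = [("ee",), ("faculty", "four"), ("cs", "faculty")]
print(filter_tube(tube, ("ee", "faculty", "four"), positive=False))
# -> [('cs', 'faculty')]
```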

  14. Detection (2) • For Encoding (2) • 1. First, for the initial example (assumed positive), generate all hypotheses consistent with it → Tube1(0) • 2. As each subsequent example arrives, generate all hypotheses that match it (→ Tube2) and repeat the following: • If positive • Tube1(n+1) = Tube1(n) ∩ Tube2 • If negative • Tube1(n+1) = Tube1(n) − Tube2
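The two tube equations translate directly into set operations. A sketch with illustrative names, taking Tube2 to be the set of hypotheses that match (cover) the incoming example, which is the reading the subtraction step requires:

```python
from itertools import combinations

def covering_hypotheses(x):
    # 'Tube2': every non-empty conjunction of x's attribute-value
    # pairs, i.e. every hypothesis that matches x.
    pairs = tuple(x.items())
    return {frozenset(c) for r in range(1, len(pairs) + 1)
                         for c in combinations(pairs, r)}

def run_protocol(examples):
    # Step 1: Tube1(0) from the initial example (assumed positive).
    x0, _ = examples[0]
    tube1 = covering_hypotheses(x0)
    # Step 2: intersect on '+', subtract on '-'.
    for x, positive in examples[1:]:
        tube2 = covering_hypotheses(x)
        tube1 = tube1 & tube2 if positive else tube1 - tube2
    return tube1

D = [({"Dept": "cs", "Status": "faculty", "Floor": "four"}, True),
     ({"Dept": "cs", "Status": "faculty", "Floor": "five"}, True),
     ({"Dept": "ee", "Status": "faculty", "Floor": "four"}, False),
     ({"Dept": "cs", "Status": "staff",   "Floor": "five"}, False)]
print(run_protocol(D))
# -> {frozenset({('Dept', 'cs'), ('Status', 'faculty')})}
```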

  15. Detection (3) • Primitive operations • ∩, − • How can these two operations be implemented?

  16. Detection (4) • For the bead-based encoding • All possible hypotheses can be generated at once, or all hypotheses consistent with a particular example can be generated

  17. Application • How should the actual classification be done? • Voting?
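One standard answer, which the "Voting?" bullet seems to point at: let every hypothesis surviving in the final tube vote on a new instance and report the majority. A sketch under that assumption (helper names are mine):

```python
def matches(h, x):
    return all(x[a] == v for a, v in h.items())

def classify_by_vote(version_space, x):
    # Majority vote over surviving hypotheses; None when undecided.
    votes = [matches(h, x) for h in version_space]
    pos, neg = votes.count(True), votes.count(False)
    if not votes or pos == neg:
        return None
    return pos > neg

# With the lone survivor cs AND faculty from the running example:
vs = [{"Dept": "cs", "Status": "faculty"}]
print(classify_by_vote(vs, {"Dept": "cs", "Status": "faculty",
                            "Floor": "five"}))   # -> True
```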

  18. References on Version Space • Machine Learning, T. M. Mitchell, McGraw-Hill • Artificial Intelligence: Theory and Practice, T. Dean et al., Addison-Wesley
