
Learning DFA from corrections

Learning DFA from corrections. Leonor Becerra-Bonache, Cristina Bibire, Adrian Horia Dediu Research Group on Mathematical Linguistics, Rovira i Virgili University Pl. Imperial Tarraco 1, 43005, Tarragona, Spain E-mail: {leonor.becerra,cristina.bibire,adrianhoria.dediu}@estudiants.urv.es.


Presentation Transcript


  1. Learning DFA from corrections Leonor Becerra-Bonache, Cristina Bibire, Adrian Horia Dediu Research Group on Mathematical Linguistics, Rovira i Virgili University Pl. Imperial Tarraco 1, 43005, Tarragona, Spain E-mail: {leonor.becerra,cristina.bibire,adrianhoria.dediu}@estudiants.urv.es

  2. Outline • Learning from queries • Learning from corrections • Comparative results • Concluding remarks • Further research • Bibliography

  3. Learning from queries In the last four decades, three important formal models have been developed within Computational Learning Theory: Gold's model of identification in the limit [4], the query learning model of Angluin [1,2], and the PAC learning model of Valiant [7]. Our paper focuses on learning DFA within the framework of query learning. Learning from queries was introduced by Dana Angluin in 1987 [1]. She gave an algorithm for learning DFA from membership and equivalence queries, and she was the first to prove learnability of DFA via queries. Later, Rivest and Schapire in 1993 [6], Hellerstein et al. in 1995 [5], and Balcázar et al. [3] developed more efficient versions of the same algorithm, trying to increase the parallelism level, to reduce the number of equivalence queries, etc.

  4. Learning from queries
• In query learning, there is a teacher that knows the language and has to answer correctly specific kinds of queries asked by the learner. In Angluin's algorithm, the learner asks two kinds of queries:
• membership query
  - consists of a string s; the answer is YES or NO depending on whether s is a member of the unknown language or not.
• equivalence query
  - is a conjecture, consisting of a description of a regular set U. The answer is YES if U is equal to the unknown language; otherwise, the answer is a string s in the symmetric difference of U and the unknown language.

  5. Learning from corrections In Angluin's algorithm, when the learner asks about a word in the language, the teacher's answer is very simple, YES or NO.

  6. Learning from corrections
• In Angluin's algorithm, when the learner asks about a word in the language, the teacher's answer is very simple, YES or NO.
• Our idea was to introduce a new type of query:
• correction query
  - it consists of a string s; the teacher has to return the smallest string s' such that s.s' belongs to the target language.

  7. Learning from corrections
• In Angluin's algorithm, when the learner asks about a word in the language, the teacher's answer is very simple, YES or NO.
• Our idea was to introduce a new type of query:
• correction query
  - it consists of a string s; the teacher has to return the smallest string s' such that s.s' belongs to the target language.
• Formally, for a string α ∈ Σ*, C(α) = min(α⁻¹L) if α⁻¹L ≠ ∅, and C(α) = φ otherwise,
where α⁻¹L = {β ∈ Σ* | α.β ∈ L} is the left quotient of L by α, and L = L(A), where A is an automaton accepting the target language.
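A correction query can be answered mechanically on a DFA for the target language: run the DFA on α, then search breadth-first for the shortest (lexicographically smallest among the shortest) suffix leading to an accepting state. The sketch below is our own (the example DFA, which accepts a^(3k), and all names are illustrative assumptions):

```python
# Sketch: answering a correction query C(alpha) on a DFA by BFS.
# The DFA here accepts strings of 'a's whose length is a multiple of 3.

from collections import deque

PHI = "φ"  # special answer: the left quotient alpha^{-1}L is empty

DELTA = {(0, "a"): 1, (1, "a"): 2, (2, "a"): 0}   # toy transition function
START, FINALS, ALPHABET = 0, {0}, ("a",)

def correction_query(alpha, delta=DELTA, start=START, finals=FINALS,
                     alphabet=ALPHABET):
    q = start
    for ch in alpha:                       # run the DFA on alpha
        if (q, ch) not in delta:
            return PHI                     # fell off the DFA: no continuation
        q = delta[(q, ch)]
    seen, frontier = {q}, deque([(q, "")])
    while frontier:                        # BFS: shortest beta found first
        state, beta = frontier.popleft()
        if state in finals:
            return beta
        for ch in sorted(alphabet):        # lexicographic tie-breaking
            nxt = delta.get((state, ch))
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, beta + ch))
    return PHI

print(correction_query("a"))    # -> "aa"
print(correction_query("aaa"))  # -> ""  (already in L)
```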

  8. Learning from corrections Observation table An observation table consists of: a non-empty prefix-closed set S of strings, a non-empty suffix-closed set E of strings, and the restriction of the mapping C to the set (S ∪ S.Σ).E. The table has one row for each string s ∈ S ∪ S.Σ, one column for each string e ∈ E, and entry C(s.e) in row s, column e.

  9. Learning from corrections Closed, consistent observation tables
For any s ∈ S ∪ S.Σ, row(s) denotes the finite function from E to Σ* ∪ {φ} defined by row(s)(e) = C(s.e).
An observation table is called closed if for every t ∈ S.Σ there exists s ∈ S such that row(t) = row(s).
An observation table is called consistent if whenever s1, s2 ∈ S satisfy row(s1) = row(s2), then row(s1.a) = row(s2.a) for every a ∈ Σ.
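The closedness and consistency tests translate directly into code. Below is a self-contained sketch (our own names and toy oracle, not the paper's code) that plugs in a correction function C for the language a^(3k) and checks both properties of a table (S, E):

```python
# Sketch: closedness / consistency tests on an observation table (S, E, C).

PHI = "φ"
ALPHABET = ("a",)

def C(s):
    """Toy correction oracle for L = { a^(3k) }: pad length up to a
    multiple of 3 (never returns PHI for this language)."""
    return "a" * ((-len(s)) % 3)

def row(s, E):
    """row(s): the finite function e -> C(s.e), encoded as a tuple."""
    return tuple(C(s + e) for e in E)

def is_closed(S, E):
    """Closed: every row of S.Σ already appears as a row of S."""
    rows_S = {row(s, E) for s in S}
    return all(row(s + a, E) in rows_S for s in S for a in ALPHABET)

def is_consistent(S, E):
    """Consistent: row(s1)=row(s2) forces row(s1.a)=row(s2.a) for all a."""
    for s1 in S:
        for s2 in S:
            if row(s1, E) == row(s2, E):
                for a in ALPHABET:
                    if row(s1 + a, E) != row(s2 + a, E):
                        return False
    return True

S, E = {"", "a", "aa"}, [""]
print(is_closed(S, E), is_consistent(S, E))   # -> True True
```

With S = {"", "a"} the table would not be closed, since row("aa") matches no row of S.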

  10. [Automaton diagram: states 0-3 with a-transitions] Learning from corrections Example: Is it closed?

  11. [Automaton diagram: states 0-3 with a-transitions] Learning from corrections Example: Is it closed? Yes

  12. [Automaton diagram: states 0-3 with a-transitions] Learning from corrections Example: Is it consistent? We have row(a)=row(aa); does row(a.a)=row(aa.a)?

  13. [Automaton diagram: states 0-3 with a-transitions] Learning from corrections Example: Is it consistent? No: row(a)=row(aa) but row(a.a)≠row(aa.a)

  14. Learning from corrections

  15. Learning from corrections
Remark 1. C(α)=βγ implies C(αβ)=γ
Remark 2. C(α)=φ implies C(αβ)=φ
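Both remarks can be checked on a tiny example. The sketch below uses a one-word language of our own choosing (not from the paper), with C computed directly from the definition; "b" already has no continuation into L, so C stays φ on every extension:

```python
# Checking Remarks 1 and 2 on a toy finite language.

PHI = "φ"
L = {"ab"}  # tiny illustrative language

def C(alpha):
    """Smallest (shortest, then lexicographic) beta with alpha+beta in L."""
    conts = sorted((w[len(alpha):] for w in L if w.startswith(alpha)),
                   key=lambda b: (len(b), b))
    return conts[0] if conts else PHI

# Remark 1: C(α)=βγ implies C(αβ)=γ.  Here C("")="ab"="a"+"b", so C("a")="b".
assert C("") == "ab" and C("a") == "b"
# Remark 2: C(α)=φ implies C(αβ)=φ.  "b" leads nowhere, and neither does "ba".
assert C("b") == PHI and C("ba") == PHI
print("Remarks 1 and 2 hold on the toy language")
```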

  16. Learning from corrections

  17. Learning from corrections Lemma 3. Assume that (S,E,C) is a closed, consistent observation table. Then the automaton A(S,E,C) is consistent with C. Sketch of the proof: by induction, using Remark 3.

  18. Learning from corrections … To conclude, we show that the claim holds for the smallest string t with the required property.

  19. Learning from corrections Lemma 4. Assume that (S,E,C) is a closed, consistent observation table. Suppose the automaton A(S,E,C) has n states. If A' is any automaton consistent with C that has n or fewer states, then A' is isomorphic with A(S,E,C). Sketch of the proof: We define a function φ from the states of A(S,E,C) to the states of A' and show that: 1. φ is well defined; 2. φ is bijective; 3. φ maps the initial state to the initial state; 4. φ preserves the transitions; 5. φ preserves the accepting states. The proof of Theorem 1 follows, since Lemma 3 shows that A(S,E,C) is consistent with C, and Lemma 4 shows that any other automaton consistent with C is either isomorphic to A(S,E,C) or contains at least one more state. Thus, A(S,E,C) is the unique smallest automaton consistent with C.

  20. Learning from corrections
• Correctness
  If the teacher answers correctly, then, if LCA ever terminates, it is clear that it outputs the target automaton.
• Termination
  Lemma 5. Let (S,E,C) be an observation table. Let n denote the number of different values of row(s) for s in S. Any automaton consistent with C must have at least n states.
• Time analysis
  The total running time of LCA can be bounded by a polynomial in n and m (the number of states of the target automaton and the maximum length of a counterexample, respectively).
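The overall LCA loop (keep the table closed and consistent, conjecture an automaton, refine with counterexamples) can be sketched compactly. Everything below is an illustrative reconstruction under our own assumptions: the target a^(3k), the correction oracle, and the bounded equivalence test are toy stand-ins, and the consistency step is omitted because it never fails over a one-letter alphabet with distinct rows:

```python
# Compact sketch of an LCA-style learning loop (illustrative, not the
# paper's implementation).

PHI, ALPHABET = "φ", ("a",)

def C(s):                                   # correction oracle for a^(3k)
    return "a" * ((-len(s)) % 3)

def row(s, E):
    return tuple(C(s + e) for e in E)

def lca(max_len=9):
    S, E = {""}, {""}
    while True:
        # Close the table: move unmatched rows of S.Σ into S.
        changed = True
        while changed:
            changed = False
            for s in sorted(S):
                for a in ALPHABET:
                    if row(s + a, E) not in {row(t, E) for t in S}:
                        S.add(s + a)
                        changed = True
        # (Consistency never fails in this one-letter toy; omitted.)
        # Build the hypothesis: states are the distinct rows.
        states = {row(s, E): s for s in sorted(S, key=len)}
        def accepts(w):
            q = ""
            for ch in w:
                q = states[row(q + ch, E)]
            return C(q) == ""              # accepting iff no correction needed
        # Bounded equivalence query against the target a^(3k).
        cex = next(("a" * n for n in range(max_len + 1)
                    if accepts("a" * n) != (n % 3 == 0)), None)
        if cex is None:
            return accepts
        for i in range(len(cex) + 1):      # add all prefixes of the cex
            S.add(cex[:i])

h = lca()
print([h("a" * n) for n in range(6)])  # [True, False, False, True, False, False]
```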

  21. Comparative results Comparative results for different languages using L* and LCA

  22. Comparative results Comparative results for different languages using L* and LCA

  23. Concluding remarks
• We have improved Angluin's query learning algorithm by replacing MQs with CQs. This approach allowed us to use a smaller number of queries and thus to reduce the learning time.
• One reason for this reduction is that the answer to a CQ embeds much more information.
• Another advantage of our approach is that we can differentiate better between states.
• Beyond the improvements discussed above, we would like to mention the adequacy of CQs to a real learning process: they reflect more accurately the process of children's language acquisition. We are aware that this kind of formalism assumes an ideal teacher who knows everything and always gives correct answers, which is an ideal situation. The learning of a natural language is an infinite process.

  24. Further research
• To prove that the number of CQs is always smaller than the number of MQs
• To prove that the number of EQs is always less than or equal
• To prove the following conjectures:
• To show that we have improved on the running time
• CQs are more expensive than MQs. How much does this affect the total running time?

  25. Bibliography
[1] D. Angluin, Learning regular sets from queries and counterexamples. Information and Computation 75, 1987, 87-106.
[2] D. Angluin, Queries and concept learning. Machine Learning 2, 1988, 319-342.
[3] J. L. Balcázar, J. Díaz, R. Gavaldà, O. Watanabe, Algorithms for learning finite automata from queries: A unified view. In Advances in Algorithms, Languages and Complexity. Kluwer Academic Publishers, 1997, 73-91.
[4] E. M. Gold, Identification in the limit. Information and Control 10, 1967, 447-474.
[5] L. Hellerstein, K. Pillaipakkamnatt, V. Raghavan, D. Wilkins, How many queries are needed to learn? Proc. 27th Annual ACM Symposium on the Theory of Computing. ACM Press, 1995, 190-199.
[6] R. L. Rivest, R. E. Schapire, Inference of finite automata using homing sequences. Information and Computation 103(2), 1993, 299-347.
[7] L. G. Valiant, A theory of the learnable. Communications of the ACM 27, 1984, 1134-1142.

  26. Thank You!
