Presentation Transcript


  1. The situation decomposition method extracts partial data that contains rules. Hiroshi Yamakawa (FUJITSU LABORATORIES LTD.)

  2. Abstract • In the infantile development process, fundamental knowledge about the external world is acquired through learning without a clear purpose. Adults are considered to use that fundamental knowledge for various tasks. The acquisition of internal models at these early stages may underlie the flexible, high-order functions of the human brain. However, research on such learning technology has made little progress to date. • A system can improve its prediction ability and reusability in later tasks by using the results of learning without a clear purpose. We therefore propose situation decomposition, a technology that selects the partial information emphasizing the relation "if one attribute value changes, another attribute value also changes." • Situation decomposition performs attribute selection and case selection simultaneously on data in which each case is an attribute vector. The newly introduced Matchability criterion is an evaluation measure that becomes large when the coverage of the selected partial information is large and a strong relation exists inside it. Situation decomposition extracts plural partial situations (results of attribute selection and case selection) corresponding to local maxima of this evaluation. • Furthermore, by extending situation decomposition in the time direction, partial problem spaces (based on the Markov decision process) can be extracted. In an action-decision task such as robot control, each partial problem space can be assigned to a module of a multi-module architecture, and the system can adapt efficiently to an unknown problem space by combining the extracted partial problem spaces.

  3. My strategy for brain-like processing • The brain has a very flexible learning ability. • Intelligent processes with more flexible learning abilities are closer to real brain processes. • I want to introduce as much learning ability into my system as possible.

  4. Contents • Development and Autonomous Learning • SOIS (Self-organizing Information Selection) as Pre-task Learning • Deriving the Matchable Principle • Situation Decomposition using the Matchability Criterion • Applications of Situation Decomposition • Conclusions & Future Work

  5. Outline of this talk • [Diagram] Cognitive Development → Autonomous Learning (framework: Pre-task learning / Task learning) → Self-organizing Information Selection → Matchable Principle → Matchability Criterion / Situation Decomposition → Situation Decomposition using the Matchability Criterion

  6. Development and Autonomous Learning

  7. Two aspects of Development • "Environmental knowledge acquired without particular goals, which helps problem solving for particular goals" • → "Pre-task Learning" in Autonomous Learning • "A calculation process that increases the predictable and/or operable objects in the world" • → Enhancing prediction ability

  8. Autonomous Learning: AL • Two-phase learning (research in RWC). [Diagram] Pre-task learning: the environment is given; environmental knowledge is acquired from existing knowledge (general facts, for design), e.g. acquiring movable paths given the fact "no reaching over the wall". Task learning: the goal is given; a solution for the goal is acquired, e.g. generating a path to the goal. Development corresponds to pre-task learning (today's topic).

  9. Pre-task Learning helps Task Learning (development; today's topic) • Autonomous Learning (AL) • Pre-task Learning: acquiring environmental knowledge without a particular goal. • Task Learning: environmental knowledge speeds up acquiring a solution for the goal. • In humans: adults can solve a given task quickly using environmental knowledge acquired for other goals or without a particular goal. • Development ~ Pre-task Learning

  10. Research topics for AL (development; today's topic) • Pre-task Learning (how to acquire environmental knowledge) • Situation Decomposition using the Matchability criterion • Situation Decomposition is a kind of Self-organizing Information Selection technology. • Task Learning (how to use environmental knowledge) • CITTA (Cognition based Intelligent Transaction Architecture): a multi-module architecture that can combine environmental knowledge acquired during pre-task learning • Cognitive Distance Learning: a goal-driven problem solver for each piece of environmental knowledge.

  11. Overview of the approach to AL • [Diagram] Pre-task Learning: Situation Decomposition (learning algorithm) acquires environmental knowledge. Task Learning: CITTA (architecture) combines environmental knowledge; Cognitive Distance Learning (learning algorithm) is the problem solver for each piece of environmental knowledge.

  12. SOIS (Self-organizing Information Selection) as Pre-task Learning

  13. SOIS: Self-organizing Information Selection • Process: selecting plural pieces of partial information from data. • → "Situation Decomposition" • Criterion: an evaluation of each piece of partial information. • → Matchability Criterion • Knowledge = a set of structures. Partial information = one kind of structure. ※ SOIS could be a kind of knowledge-acquiring process in development.

  14. Situation Decomposition is a kind of SOIS • For situation decomposition, partial information = situation. • Extracting plural situations, each a combination of selected attributes and cases, from a spreadsheet. [Figure: cases × attributes matrix with matchable situations MS1–MS4 as sub-blocks]

  15. Deriving the Matchable Principle

  16. Two aspects of Development • "Environmental knowledge acquired without particular goals, which helps problem solving for particular goals" • → "Pre-task Learning" in Autonomous Learning • "A calculation process that increases the predictable and/or operable objects in the world" • → Enhancing prediction ability

  17. How to enhance prediction ability • We need a criterion for selecting situations. • We want to extract local structures. [Figure: situation decomposition extracting matchable situations MS1–MS4] • Multiple local structures are mixed in real-world data.

  18. Deriving the Matchable Principle • Extracting structure (knowledge) without particular goals. • Prediction is based on matching a case with experiences. • What is the criterion for selecting each situation? • Matchable principle: "Structures where the matching opportunity is large are selected."

  19. Factors in the Matchable Principle • To increase matching opportunity. [Diagram relating criteria] Coverage of the data (case-increasing, attribute-increasing; cf. association rules) together with relation in the structure → our proposed Matchability criterion. Consistency with the data (accuracy, minimizing error) together with simplicity of the structure → Ockham's razor (MDL, AIC).

  20. SD (Situation Decomposition) and Implementation

  21. Situation Decomposition • Extracting plural situations, each a combination of selected attributes and cases, from a spreadsheet. [Figure: cases × attributes matrix with matchable situations MS1–MS4] • Matchability: this criterion evaluates the matching opportunity. • Matchable situation = a local maximum of Matchability.

  22. Formalization: whole situation and partial situations • Whole situation J = (D, N): contains D attributes and N cases. • Attribute selection vector: d = (d1, d2, …, dD) • Case selection vector: n = (n1, n2, …, nN) • The vector elements di, ni are binary indicators of selection/unselection. • Number of selected attributes: d • Number of selected cases: n • Situation decomposition extracts some matchable situations from the whole situation J = (D, N), which potentially contains 2^(D+N) partial situations.
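As an illustration of this formalization, here is a minimal Python sketch; the matrix layout, the boolean selection masks, and the example selection rule are assumptions for illustration, not details from the original slides.

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.random((100, 5))        # whole situation J: N = 100 cases (rows), D = 5 attributes (columns)

    d = np.array([True, True, False, False, False])   # attribute selection vector (d1..dD)
    n = data[:, 0] + data[:, 1] < 1.2                 # case selection vector (n1..nN), illustrative rule

    partial = data[np.ix_(n, d)]       # partial situation: selected cases x selected attributes
    print(partial.shape)               # (number of selected cases, number of selected attributes)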

  23. Case selection using the segment space • The segment space is the product of the discretizations of the selected attributes (example in two dimensions: S_d = s1 · s2). • n: number of selected cases • S_d: number of total segments • r_d: number of selected segments • [Figure: two-dimensional grid over attribute1 × attribute2] ※ Cases inside the chosen segments are always chosen.
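One possible reading of the segment-space construction in Python; the bin count, the [0, 1) attribute range, and the "keep sufficiently dense segments" rule are assumptions made only for this sketch.

    import numpy as np

    def segment_ids(data, selected_attrs, bins=4):
        """Map each case to the index of the grid cell (segment) it falls into."""
        ids = np.zeros(len(data), dtype=int)
        for a in selected_attrs:
            col = np.clip((data[:, a] * bins).astype(int), 0, bins - 1)
            ids = ids * bins + col            # mixed-radix index over S_d = bins ** len(selected_attrs)
        return ids

    rng = np.random.default_rng(0)
    data = rng.random((500, 3))
    ids = segment_ids(data, selected_attrs=[0, 2], bins=4)

    S_d = 4 ** 2                              # number of total segments
    occupied, counts = np.unique(ids, return_counts=True)
    chosen = occupied[counts >= 10]           # assumed rule: keep sufficiently dense segments
    r_d = len(chosen)                         # number of selected segments
    n = int(np.isin(ids, chosen).sum())       # number of selected cases (all cases in chosen segments)
    print(S_d, r_d, n)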

  24. Matchability criterion from the Matchable Principle • Simplicity of structure: [number of selected segments] r_d → make smaller. • Coverage of the data: [number of selected cases] n → make larger; [number of total segments] S_d → make larger. • N: total number of cases; C1, C2, C3: positive constants.
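The slide's exact formula is not recoverable from this transcript, so the function below is only an assumed illustration of a criterion with the stated behaviour: it grows with the number of selected cases n and the number of total segments S_d, and shrinks with the number of selected segments r_d.

    import math

    def matchability(n, r_d, S_d, N, C1=1.0, C2=1.0, C3=1.0):
        """Illustrative (assumed) Matchability score: large coverage, simple structure."""
        coverage   = C1 * math.log(n / N)     # reward explaining many of the N cases
        resolution = C2 * math.log(S_d)       # reward a fine segment space
        simplicity = -C3 * math.log(r_d)      # penalise needing many segments
        return coverage + resolution + simplicity

    print(matchability(n=400, r_d=8, S_d=16, N=500))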

  25. Matchability focuses on covariance • Types of relations: • Coincidence: the relation in which two events happen simultaneously. • Covariance: the relation in which another attribute value also changes if one attribute value changes. • Matchability estimates covariance in the selected data for categorical attributes.

  26. How to find situations • The algorithm searches for local maxima of the Matchability criterion. • Algorithm overview: • for each subset of the D attributes • search for local maxima • reject saddle points • end • Time complexity ∝ 2^D
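A compact, runnable sketch of the exhaustive search described here, reusing the matchability() function from the sketch after slide 24: every attribute subset (2^D of them) is scored, and a subset is kept when it beats all neighbours that differ by one attribute. The neighbourhood definition and the omission of the saddle-point test are simplifying assumptions.

    import math
    from itertools import combinations
    import numpy as np

    def score_subset(data, attrs, bins=4, min_count=10):
        """Score one attribute subset: build segments, pick dense ones, evaluate Matchability."""
        ids = np.zeros(len(data), dtype=int)
        for a in attrs:
            col = np.clip((data[:, a] * bins).astype(int), 0, bins - 1)
            ids = ids * bins + col
        occupied, counts = np.unique(ids, return_counts=True)
        chosen = occupied[counts >= min_count]
        if len(chosen) == 0:
            return -math.inf
        n = int(np.isin(ids, chosen).sum())
        return matchability(n=n, r_d=len(chosen), S_d=bins ** len(attrs), N=len(data))  # sketch from slide 24

    def situation_decomposition(data):
        D = data.shape[1]
        subsets = [frozenset(c) for k in range(1, D + 1) for c in combinations(range(D), k)]
        scores = {s: score_subset(data, sorted(s)) for s in subsets}
        maxima = []
        for s in subsets:
            neighbours = [s ^ {a} for a in range(D) if s ^ {a}]   # add or remove one attribute
            if all(scores.get(t, -math.inf) <= scores[s] for t in neighbours):
                maxima.append((sorted(s), scores[s]))
        return maxima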

  27. Simple example • Input situation: a mixture of cases on two planes. Situation A: x + z = 1; Situation B: y + z = 1. • Extracted situations: MS1 = input situation A; MS2 = input situation B; and a new situation MS3: the line x = y, x + z = 1.

  28. Generalization ability • Multi-valued function φ: (x, y) → z • Even if input situation A (x + z = 1) lacks half of its parts, so that no data exists in the range y > 0.5, our method outputs φ_MS1(0, 1) = 1.0.

  29. Applications of Situation Decomposition (SD)

  30. Multi-module Prediction System [Figure: prediction modules connected between input and output]
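One way a multi-module predictor of this kind could be sketched in Python; the nearest-neighbour predictor inside each module and the "first matching module answers" gating are assumptions, not details given on the slide.

    import numpy as np

    class SituationModule:
        """One matchable situation acting as a prediction module over its own selected data."""
        def __init__(self, inputs, outputs):
            self.inputs, self.outputs = inputs, outputs
        def matches(self, x, radius=0.1):
            # the module only answers queries close to the data it was extracted from
            return np.min(np.linalg.norm(self.inputs - x, axis=1)) < radius
        def predict(self, x):
            # nearest-neighbour prediction within the situation (assumed predictor)
            return self.outputs[np.argmin(np.linalg.norm(self.inputs - x, axis=1))]

    def predict(modules, x):
        for m in modules:
            if m.matches(x):
                return m.predict(x)
        return None

A module built from MS1 would hold the (x, y) inputs and z outputs of situation A, and likewise for MS2 and MS3.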

  31. Training cases and test cases • Training cases: 500 cases are sprayed uniformly on each plane in the range x = [0.0, 1.0] and y = [0.0, 1.0]. • Test cases: 11 × 11 cases are arranged at notches at a regular interval of 0.1 on each plane. • q: sampling rate
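A small sketch reproducing this data layout (the two planes are those of slide 27, x + z = 1 and y + z = 1); the random seed is arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)

    def plane_a(m):                      # situation A: x + z = 1
        xy = rng.random((m, 2))
        return np.column_stack([xy, 1.0 - xy[:, 0]])

    def plane_b(m):                      # situation B: y + z = 1
        xy = rng.random((m, 2))
        return np.column_stack([xy, 1.0 - xy[:, 1]])

    train = np.vstack([plane_a(500), plane_b(500)])          # 500 cases sprayed on each plane

    grid = np.linspace(0.0, 1.0, 11)                         # 11 notches at interval 0.1
    test_xy = np.array([(x, y) for x in grid for y in grid]) # 11 x 11 test inputs
    print(train.shape, test_xy.shape)                        # (1000, 3) (121, 2)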

  32. Prediction result [Figure: predicted surfaces without matchable situations vs. with matchable situations]

  33. Autonomous Learning: AL • Two-step learning (research in RWC). [Diagram] Pre-task learning: the environment is given; environmental knowledge is acquired from existing knowledge (general facts, for design), e.g. acquiring movable paths given the fact "no reaching over the wall". Task learning: the goal is given; a solution for the goal is acquired, e.g. generating a path to the goal. Development corresponds to pre-task learning (today's topic).

  34. Demonstration of Autonomous Learning • Door & Key task with CITTA. • The agent acquires knowledge as situations. • The door can be opened with the key. [Figure: grid world with key, telephone, mobile agent, door, start, and goal]

  35. Each situation is used as a module • [Diagram] Pre-task learning: extracting matchable situations. Task learning: combining matchable situations. The mobile agent holds matchable situations such as "open door by key", "open door by telephone", "go by the wall", and "go straight". Input/output with the environment: position, action, object, belongings, …

  36. Situation Decomposition in AL • SD in pre-task learning: • Situation decomposition handles the input/output vectors of two time steps to extract a Markov process. • Advantages of SD in task learning: • Adaptation by combining situations is possible. • The learning data can be reduced, because the learning space for each module is reduced.
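A minimal sketch of the two-time-step table mentioned here; the exact column layout (state, action, next state) is an assumption about how the input/output vectors would be stacked before running situation decomposition on them.

    import numpy as np

    def two_step_table(states, actions):
        """Build rows [s_t, a_t, s_{t+1}] from a trajectory."""
        s_t, s_next = states[:-1], states[1:]
        a_t = actions[:-1]
        return np.hstack([s_t, a_t, s_next])

    states = np.random.default_rng(0).random((200, 4))   # toy trajectory of 4-dimensional states
    actions = np.random.default_rng(1).random((200, 2))  # toy 2-dimensional actions
    table = two_step_table(states, actions)              # feed this table to situation decomposition
    print(table.shape)                                   # (199, 10)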

  37. Conclusions and Future Work

  38. Conclusions • [Diagram] Cognitive Development → Autonomous Learning (Pre-task learning / Task learning) → Self-organizing Information Selection → Matchable Principle → Matchability Criterion / Situation Decomposition → Situation Decomposition using the Matchability Criterion

  39. Conclusions & future work: situation decomposition • Matchability is a new model selection criterion that maximizes matching opportunity and emphasizes coverage of the data; in contrast, Ockham's razor emphasizes consistency with the data. • Situations decomposed with the Matchability criterion have strong prediction ability. • The situation decomposition method can be applied to pre-processing for data analysis, self-organization, pattern recognition, and so on.

  40. Future work • Situation decomposition: • Needs theoretical research on the Matchability criterion. • This intuitively derived criterion is affected by unbalanced data. • Needs a speed-up for large-scale problems. • The exponential time complexity in the number of attributes is prohibitive. • Advanced Self-organizing Information Selection: • The situation decomposition method only selects sets of attributes and cases. • Autonomous Learning: • Relate it to knowledge from cognitive science.
