

1. Methods of Logic Synthesis and Their Application in Data Mining
Presentation given at KNU, Daegu (Korea), 25.11.2012
Tadeusz Łuba
Institute of Telecommunications, Faculty of Electronics and Information Technology, Warsaw University of Technology

2. Logic synthesis vs. Data Mining
• applicability of the logic synthesis algorithms in Data Mining
• data mining extends the application of LS to:
  • medicine
  • pharmacology
  • banking
  • linguistics
  • telecommunication
  • environmental engineering
Data Mining is the process of automatic discovery of significant and previously unknown information from large databases.

3. It is able to diagnose the patient. It is able to make a survey. It is able to classify data. It is able to decide on granting a loan to the bank customer. Data mining is also called knowledge discovery in databases.

4. Gaining knowledge from databases at the abstract level of data mining algorithms means using the procedures of:
• reduction of attributes,
• generalization of decision rules,
• making a hierarchical decision.
These algorithms are similar to those used in logic synthesis!

5. Data mining vs. logic synthesis
• generalization of decision rules (rule induction) ↔ minimization of the Boolean function (logic minimization)
• reduction of attributes ↔ reduction of arguments
• hierarchical decision making ↔ functional decomposition

6. Data mining systems
RSES – Rough Set Exploration System: http://logic.mimuw.edu.pl/~rses/
ROSETTA – Rough Set Toolkit for Analysis of Data, Biomedical Centre (BMC), Uppsala, Sweden: http://www.lcb.uu.se/tools/rosetta/

7. Breast Cancer Database: Diagnosis of breast cancer
• Number of instances: 699 training cases
• Number of attributes: 10
• Classification (2 classes)
Attributes: Clump Thickness, Uniformity of Cell Size, Uniformity of Cell Shape, …, Mitoses (attribute 9)
Source: Dr. William H. Wolberg (physician), University of Wisconsin Hospital, Madison, Wisconsin, USA

8. Breast Cancer database – the data after discretization.

9. RULE_SET breast_cancer
RULES 35
(x9=1)&(x8=1)&(x2=1)&(x6=1)=>(x10=2)
(x9=1)&(x2=1)&(x3=1)&(x6=1)=>(x10=2)
(x9=1)&(x8=1)&(x4=1)&(x3=1)=>(x10=2)
(x9=1)&(x4=1)&(x6=1)&(x5=2)=>(x10=2)
…………………..
(x9=1)&(x6=10)&(x1=10)=>(x10=4)
(x9=1)&(x6=10)&(x5=4)=>(x10=4)
(x9=1)&(x6=10)&(x1=8)=>(x10=4)
REDUCTS (27)
{ x1, x2, x3, x4, x6 }
{ x1, x2, x3, x5, x6 }
{ x2, x3, x4, x6, x7 }
{ x1, x3, x4, x6, x7 }
{ x1, x2, x4, x6, x7 }
…………….
{ x3, x4, x5, x6, x7, x8 }
{ x3, x4, x6, x7, x8, x9 }
{ x4, x5, x6, x7, x8, x9 }

10. Increasing requirements. We are overwhelmed with data! References: [1]

11. UC Irvine Machine Learning Repository. Are the existing methods and algorithms for data mining sufficiently efficient? Breast Cancer Database – 10 attributes; Audiology Database – 71 attributes; Dermatology Database – 34 attributes. Why is that the case? How can these algorithms be improved?

12. Classic method. The discernibility matrix (DM) leads to the discernibility function (DF): a conjunction of clauses, where each clause is a disjunction of attributes. References: [9]. The key issue is to transform the DF from CNF to DNF, which is NP-hard; every monomial of the resulting DNF corresponds to a reduct.
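A note on slide 12: the route from a decision table to reducts can be sketched in a few lines of Python (an illustration only, not the code of RSES or of the authors; the toy table at the bottom is made up). Each clause of the discernibility function is the set of attributes on which a pair of differently classified objects differs; multiplying the clauses out with absorption yields the reducts as the surviving monomials.

from itertools import combinations

def reducts_classic(objects, decisions):
    # objects: equal-length tuples of attribute values, decisions: class labels
    n_attr = len(objects[0])
    # discernibility matrix: one clause (attribute set) per pair of objects
    # belonging to different decision classes
    clauses = set()
    for i, j in combinations(range(len(objects)), 2):
        if decisions[i] != decisions[j]:
            clauses.add(frozenset(a for a in range(n_attr)
                                  if objects[i][a] != objects[j][a]))
    clauses.discard(frozenset())
    # discernibility function = conjunction of the clauses (CNF);
    # distribute clause by clause and absorb supersets -- this is the NP-hard step
    dnf = {frozenset()}
    for clause in clauses:
        expanded = {term | {a} for term in dnf for a in clause}
        dnf = {t for t in expanded if not any(o < t for o in expanded)}
    return sorted(dnf, key=len)          # minimal monomials = reducts

# toy table: 4 objects, 3 attributes (indexed 0..2), 2 decision classes
print(reducts_classic([(1, 0, 1), (1, 1, 0), (0, 0, 1), (0, 1, 1)],
                      ["yes", "yes", "no", "no"]))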

13. The method can be significantly improved by using a typical logic synthesis procedure: Boolean function complementation. Instead of transforming the CNF into DNF, we represent the CNF as a binary matrix M; the matrix M is treated as a Boolean function F, and we compute the complement of F. F is always a unate function!

14. Using the Complement Theorem…
Matrix M (Espresso format):
.i 4
.o 1
.p 4
11-1 1
--11 1
11-- 1
1--1 1
.end
fM = x1x2x4 + x3x4 + x1x2 + x1x4 = x1x4 + x3x4 + x1x2
Discernibility function:
F = (x1 + x2 + x4)(x3 + x4)(x1 + x2)(x1 + x4) = (x1 + x2)(x1 + x4)(x3 + x4) = (x1 + x2x4)(x3 + x4) = x1x3 + x2x4 + x1x4
The same result!
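The two derivations on slide 14 agree because of De Morgan duality: F(x) = NOT fM(NOT x), so the complement of the unate function fM, with every complemented literal re-read as a plain attribute, is a DNF of the discernibility function. A brute-force check of this identity on the slide's four-variable example (a sketch; the function names are ad hoc):

from itertools import product

def f_M(x1, x2, x3, x4):      # SOP read off the rows of matrix M
    return (x1 and x2 and x4) or (x3 and x4) or (x1 and x2) or (x1 and x4)

def F(x1, x2, x3, x4):        # discernibility function (CNF)
    return (x1 or x2 or x4) and (x3 or x4) and (x1 or x2) and (x1 or x4)

def F_dnf(x1, x2, x3, x4):    # the result quoted on the slide
    return (x1 and x3) or (x2 and x4) or (x1 and x4)

for bits in product([False, True], repeat=4):
    assert F(*bits) == (not f_M(*(not b for b in bits)))   # De Morgan duality
    assert F(*bits) == F_dnf(*bits)                        # same function
print("identity holds on all 16 assignments")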

15. The key element: Fast Complementation Algorithm
Recursive Complementation Theorem: F' = xj·(Fxj)' + xj'·(Fxj')', where Fxj (respectively Fxj') is called the cofactor of F with respect to the variable xj (respectively xj'). The problem of complementing the function F is transformed into the problem of finding the complements of two simpler cofactors.

16. Unate Complementation
The entire process reduces to three simple calculations:
• the choice of the splitting variable,
• calculation of the cofactors: F = xjF0 + F1,
• testing the rules for termination.
Recursion scheme: matrix M is split into Cofactor 1 and Cofactor 0, each cofactor is split further until a termination rule applies, and the resulting complements are combined on the way back (merging).
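The recursion on slide 16 can be sketched compactly under the assumption that F is given as a positive-unate sum of products, each term being a set of attribute indices (the function and variable names below are mine, not the authors'). The returned sets stand for products of complemented literals; in the reduct application, each of them re-read positively is one reduct.

from collections import Counter

def absorb(cover):
    # single-cube containment: drop every term that contains another term
    kept = []
    for t in sorted(cover, key=len):
        if not any(k <= t for k in kept):
            kept.append(t)
    return kept

def unate_complement(cover):
    cover = absorb([frozenset(t) for t in cover])
    if not cover:                      # F = 0, so the complement is 1
        return [frozenset()]
    if frozenset() in cover:           # F = 1, so the complement is 0
        return []
    if len(cover) == 1:                # De Morgan on a single product term
        return [frozenset([v]) for v in cover[0]]
    # splitting variable: the one occurring in the most terms
    xj = Counter(v for t in cover for v in t).most_common(1)[0][0]
    cof1 = [t - {xj} for t in cover]            # F restricted to xj = 1
    cof0 = [t for t in cover if xj not in t]    # F restricted to xj = 0
    # merging: for a positive-unate F, complement(F) = complement(cof1) + xj'*complement(cof0)
    merged = unate_complement(cof1) + [t | {xj} for t in unate_complement(cof0)]
    return absorb(merged)

On the slide-14 matrix, unate_complement([{1, 2, 4}, {3, 4}, {1, 2}, {1, 4}]) returns the three sets {1, 3}, {1, 4} and {2, 4}, i.e. x1x3 + x1x4 + x2x4 once the literals are re-read positively, which is the same result as on the slide.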

17. An Example
F = x4 x6 (x1 + x2)(x3 + x5 + x7)(x2 + x3)(x2 + x7)
.i 7
.o 1
.p 6
11----- 1
--1-1-1 1
-11---- 1
-1----1 1
---1--- 1
-----1- 1
.end

18. An Example (continued)
The complement consists of the terms x2x3, x2x5, x2x7, x1x3x7.
Reducts: {x2,x3,x4,x6} {x2,x4,x5,x6} {x2,x4,x6,x7} {x1,x3,x4,x6,x7}
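Assuming the unate_complement sketch from slide 16 is in scope, the slide-17 example can be reproduced directly: the rows of the PLA become product terms, and the terms of the computed complement, read positively, are exactly the reducts listed above.

clauses = [{4}, {6}, {1, 2}, {3, 5, 7}, {2, 3}, {2, 7}]   # rows of the slide-17 PLA
for term in unate_complement(clauses):
    print(sorted(term))   # [2, 3, 4, 6], [2, 4, 5, 6], [2, 4, 6, 7], [1, 3, 4, 6, 7]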

19. Verification
Calculating the reducts using the standard method:
(x1 + x2)(x3 + x5 + x7)(x2 + x3)(x2 + x7) = (x2 + x1)(x2 + x3)(x2 + x7)(x3 + x5 + x7) = (x2 + x1x3x7)(x3 + x5 + x7) = x2x3 + x2x5 + x2x7 + x1x3x7
Adding {x4,x6}: {x2,x3,x4,x6} {x2,x4,x5,x6} {x2,x4,x6,x7} {x1,x3,x4,x6,x7}
The same set of reducts!

20. Boolean function KAZ
.type fr
.i 21
.o 1
.p 31
100110010110011111101 1
111011111011110111100 1
001010101000111100000 1
001001101100110110001 1
100110010011011001101 1
100101100100110110011 1
001100100111010011011 1
001101100011011011001 1
110110010011001001101 1
100110110011010010011 1
110011011011010001100 1
010001010000001100111 0
100110101011111110100 0
111001111011110011000 0
101101011100010111100 0
110110000001010100000 0
110110110111100010111 0
110000100011110010001 0
001001000101111101101 0
100100011111100110110 0
100011000110011011110 0
110101000110101100001 0
110110001101101100111 0
010000111001000000001 0
001001100101111110000 0
100100111111001110010 0
000010001110001101101 0
101000010100001110000 0
101000110101010011111 0
101010000001100011001 0
011100111110111101111 0
.end
All solutions: 5574; solutions with the minimum number of arguments: 35.
The function after argument reduction (5 inputs):
01010 1
10110 1
00100 1
01001 1
01000 1
11010 1
10011 0
01110 0
10100 0
11000 0
11011 0
10000 0
00010 0
01111 0
00011 0
11111 0
00000 0
01101 0
00110 0
Computation time: RSES = 70 min, proposed method = 234 ms. 18000 times faster!
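The same machinery covers argument reduction for a .type fr table such as KAZ: for every pair of an ON-set row and an OFF-set row, the input positions on which the two rows differ form one clause, and the minimal transversals of those clauses are the minimal sets of arguments. A sketch (it reuses the unate_complement function shown earlier and is not the tool behind the quoted timings):

def argument_reduction_clauses(on_rows, off_rows):
    # on_rows / off_rows: '0'/'1' strings taken from the rows of the table
    clauses = set()
    for p in on_rows:
        for q in off_rows:
            clauses.add(frozenset(i for i, (a, b) in enumerate(zip(p, q)) if a != b))
    return clauses

# minimal argument sets = unate_complement(argument_reduction_clauses(on_rows, off_rows)),
# each returned set re-read as a set of input positions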

21. Conclusion
The new method reduces the computation time by a couple of orders of magnitude compared with RSES. How will this acceleration affect the speed of calculations for typical databases?

22. Experimental results
The absolute triumph of the complementation method!

23. Further possibilities…
…of applying logic synthesis methods to Data Mining problems:
• generalization of decision rules
• hierarchical decision making
• minimization of the Boolean function
• functional decomposition

24. RSES vs Espresso
ESPRESSO:
.i 7
.o 1
.type fr
.p 9
1000101 0
1011110 0
1101110 0
1110111 0
0100101 1
1000110 1
1010000 1
1010110 1
1110101 1
.e
RSES:
TABLE extlbis
ATTRIBUTES 8
x1 numeric 0
x2 numeric 0
x3 numeric 0
x4 numeric 0
x5 numeric 0
x6 numeric 0
x7 numeric 0
x8 numeric 0
OBJECTS 9
1 0 0 0 1 0 1 0
1 0 1 1 1 1 0 0
1 1 0 1 1 1 0 0
1 1 1 0 1 1 1 0
0 1 0 0 1 0 1 1
1 0 0 0 1 1 0 1
1 0 1 0 0 0 0 1
1 0 1 0 1 1 0 1
1 1 1 0 1 0 1 1
Decision rules:
(x1=1)&(x5=1)&(x6=1)&(x2=1)=>(x8=0)
(x1=1)&(x2=0)&(x5=1)&(x3=0)&(x4=0)&(x6=0)=>(x8=0)
(x4=0)&(x1=1)&(x2=0)&(x7=0)=>(x8=1)
(x2=1)&(x4=0)&(x5=1)&(x6=0)=>(x8=1)
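Both files describe the same nine objects; only the encoding differs. A small sketch (the file name and helper are mine, for illustration) that writes the rows of a binary decision table, with the last column as the decision, in the Espresso .type fr form used above:

def table_to_pla(rows, path):
    # rows: sequences of 0/1 values; the last value of each row is the decision
    n_in = len(rows[0]) - 1
    lines = [".i %d" % n_in, ".o 1", ".type fr", ".p %d" % len(rows)]
    for row in rows:
        lines.append("".join(str(v) for v in row[:-1]) + " %d" % row[-1])
    lines.append(".e")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# e.g. table_to_pla([(1,0,0,0,1,0,1,0), (1,0,1,1,1,1,0,0)], "extlbis.pla")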

25. Hierarchical decision making
Is it possible to use decomposition to solve difficult tasks of data mining?
The set of attributes is split into two sets, A and B. The decision table DT(G) computes an intermediate decision G from the attributes B, and the decision table DT(H) combines the attributes A with that intermediate decision into the final decision: F = H(A, G(B)).
Partition condition for the decomposition: G ≥ P(B) and P(A)·G ≤ PD, where G is the partition induced by the intermediate decision, P(A) and P(B) are the partitions induced by the attribute sets, and PD is the partition induced by the final decision.
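One way to test whether a particular split of the attributes into sets A and B admits such a decomposition is to count the distinct columns of the decomposition chart: G(B) only has to distinguish that many values, and for a single binary intermediate signal the count must not exceed two. A sketch for a completely specified Boolean function (the partition condition above is the general, decision-table version of the same idea):

from itertools import product

def column_multiplicity(f, n_vars, B_idx):
    # f: function of a tuple of n_vars bits; B_idx: indices of the B attributes
    A_idx = [i for i in range(n_vars) if i not in B_idx]
    columns = set()
    for b in product((0, 1), repeat=len(B_idx)):       # one chart column per B-assignment
        col = []
        for a in product((0, 1), repeat=len(A_idx)):   # rows indexed by A-assignments
            x = [0] * n_vars
            for i, v in zip(A_idx, a):
                x[i] = v
            for i, v in zip(B_idx, b):
                x[i] = v
            col.append(f(tuple(x)))
        columns.add(tuple(col))
    return len(columns)

# e.g. column_multiplicity(lambda x: x[0] ^ (x[1] & x[2]), 3, B_idx=[1, 2]) returns 2,
# so F = H(A, G(B)) exists with G(B) = x1·x2 and H combining x0 with the single bit G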

26. Data compression
The data set HOUSE (1984 United States Congressional Voting Records Database):
democrat n y y n y y n n n n n n y y y y
republican n y n y y y n n n n n y y y n y
democrat y y y n n n y y y n y n n n y y
democrat y y y n n n y y y n n n n n y y
democrat y n y n n n y y y y n n n n y y
democrat y n y n n n y y y n y n n n y y
democrat y y y n n n y y y n y n n n y y
republican y n n y y n y y y n n y y y n y
………………..........
democrat y y y n n n y y y n y n n n y y
republican y y n y y y n n n n n n y y n y
republican n y n y y y n n n y n y y y n n
democrat y n y n n n y y y y y n y n y y
democrat y n y n n n y y y n n n n n n y
democrat y n y n n n y y y n n n n n y y
Decomposition into tables G and H: 68% space reduction.

27. Summary
• Typical logic synthesis algorithms and methods are effectively applicable to seemingly different modern problems of data mining.
• It is also important to study the theoretical foundations of new concepts in data mining, e.g. functional decomposition.
• Solving these challenges requires the cooperation of specialists from different fields of knowledge.

28. References
Abdullah, S., Golafshan, L., Mohd Zakree Ahmad Nazri: Re-heat simulated annealing algorithm for rough set attribute reduction. International Journal of the Physical Sciences 6(8), 2083–2089 (2011)
Agrawal, R., Mannila, H., Srikant, R., Toivonen, H., Verkamo, A.I.: Fast Discovery of Association Rules. In: Advances in KDD, pp. 307–328. AAAI, Menlo Park (1996)
An, A., Shan, N., Chan, C., Cercone, N., Ziarko, W.: Discovering rules for water demand prediction: an enhanced rough-set approach. Engineering Applications of Artificial Intelligence 9, 645–653 (1996)
Bazan, J., Nguyen, H.S., Nguyen, S.H., Synak, P., Wróblewski, J.: Rough set algorithms in classification problem. In: Polkowski, L., Tsumoto, S., Lin, T.Y. (eds.) Rough Set Methods and Applications: New Developments in Knowledge Discovery in Information Systems, vol. 56, pp. 49–88. Physica-Verlag, Heidelberg (2000)
Bazan, J., Skowron, A., Synak, P.: Dynamic Reducts as a Tool for Extracting Laws from Decision Tables. In: Raś, Z.W., Zemankova, M. (eds.) ISMIS 1994. LNCS (LNAI), vol. 869, pp. 346–355. Springer, Heidelberg (1994)
Bazan, J.G., Szczuka, M.S.: RSES and RSESlib – A Collection of Tools for Rough Set Computations. In: Rough Sets and Current Trends in Computing, pp. 106–113 (2000)
Beynon, M.: Reducts within the variable precision rough sets model: a further investigation. European Journal of Operational Research 134, 592–605 (2001)
Borowik, G., Łuba, T., Zydek, D.: Features reduction using logic minimization techniques. International Journal of Electronics and Telecommunications 58(1), 71–76 (2012)
Brayton, R.K., Hachtel, G.D., McMullen, C.T., Sangiovanni-Vincentelli, A.: Logic Minimization Algorithms for VLSI Synthesis. Kluwer Academic Publishers (1984)
Brzozowski, J.A., Łuba, T.: Decomposition of Boolean functions specified by cubes. Journal of Multi-Valued Logic & Soft Computing 9, 377–417 (2003)
Dash, R., Dash, R., Mishra, D.: A hybridized rough-PCA approach of attribute reduction for high dimensional data set. European Journal of Scientific Research 44(1), 29–38 (2010)
Feixiang, Z., Yingjun, Z., Li, Z.: An efficient attribute reduction in decision information systems. In: International Conference on Computer Science and Software Engineering, pp. 466–469, Wuhan, Hubei (2008). DOI: 10.1109/CSSE.2008.1090
Grzenda, M.: Prediction-Oriented Dimensionality Reduction of Industrial Data Sets. In: Mehrotra, K.G., Mohan, C.K., Oh, J.C., Varshney, P.K., Ali, M. (eds.) Modern Approaches in Applied Intelligence, LNAI 6703, pp. 232–241 (2011)
Hedar, A.R., Wang, J., Fukushima, M.: Tabu search for attribute reduction in rough set theory. Journal of Soft Computing – A Fusion of Foundations, Methodologies and Applications 12(9), 909–918 (2008). DOI: 10.1007/s00500-007-0260-1
Herbert, J.P., Yao, J.T.: Rough set model selection for practical decision making. In: Proceedings of the 4th International Conference on Fuzzy Systems and Knowledge Discovery, pp. 203–207 (2007)
Huhtala, Y., Karkkainen, J., Porkka, P., Toivonen, H.: TANE: An Efficient Algorithm for Discovering Functional and Approximate Dependencies. The Computer Journal 42(2), 100–111 (1999)
Inuiguchi, M.: Several approaches to attribute reduction in variable precision rough set model. In: Modeling Decisions for Artificial Intelligence, pp. 215–226 (2005)
Jelonek, J., Krawiec, K., Stefanowski, J.: Comparative study of feature subset selection techniques for machine learning tasks. In: Proceedings of IIS, Malbork, Poland, pp. 68–77 (1998)
Jensen, R., Shen, Q.: Semantics-preserving dimensionality reduction: Rough and fuzzy rough-based approaches. IEEE Transactions on Knowledge and Data Engineering 16, 1457–1471 (2004)
Jing, S., She, K.: Heterogeneous attribute reduction in noisy system based on a generalized neighborhood rough sets model. World Academy of Science, Engineering and Technology 75, 1067–1072 (2011)
Kalyani, P., Karnan, M.: A new implementation of attribute reduction using quick relative reduct algorithm. International Journal of Internet Computing 1(1), 99–102 (2011)
Katzberg, J.D., Ziarko, W.: Variable precision rough sets with asymmetric bounds. In: Ziarko, W. (ed.) Rough Sets, Fuzzy Sets and Knowledge Discovery, pp. 167–177. Springer, London (1994)
Kryszkiewicz, M., Cichoń, K.: Towards scalable algorithms for discovering rough set reducts. In: Peters, J., Skowron, A., Grzymała-Busse, J., Kostek, B., Świniarski, R., Szczuka, M. (eds.) Transactions on Rough Sets I. LNCS, vol. 3100, pp. 120–143. Springer, Berlin Heidelberg (2004). DOI: 10.1007/978-3-540-27794-1_5
Pawlak, Z., Skowron, A.: Rudiments of rough sets. Information Sciences 177, 3–27 (2007)
Pei, X., Wang, Y.: An approximate approach to attribute reduction. International Journal of Information Technology 12(4), 128–135 (2006)
Rawski, M., Borowik, G., Łuba, T., Tomaszewicz, P., Falkowski, B.J.: Logic synthesis strategy for FPGAs with embedded memory blocks. Electrical Review 86(11a), 94–101 (2010)
Shan, N., Ziarko, W., Hamilton, H.J., Cercone, N.: Discovering Classification Knowledge in Databases Using Rough Sets. In: Proceedings of KDD, pp. 271–274 (1996)
Skowron, A.: Boolean Reasoning for Decision Rules Generation. In: Komorowski, J., Raś, Z.W. (eds.) ISMIS 1993. LNCS, vol. 689, pp. 295–305. Springer, Heidelberg (1993)
Skowron, A., Rauszer, C.: The discernibility matrices and functions in information systems. In: Słowiński, R. (ed.) Intelligent Decision Support – Handbook of Applications and Advances of the Rough Sets Theory. Kluwer Academic Publishers (1992)
Slezak, D.: Approximate Reducts in Decision Tables. In: Proceedings of IPMU, Granada, Spain, vol. 3, pp. 1159–1164 (1996)
Slezak, D.: Searching for Frequential Reducts in Decision Tables with Uncertain Objects. In: Polkowski, L., Skowron, A. (eds.) RSCTC 1998. LNCS, vol. 1424, pp. 52–59. Springer, Heidelberg (1998)
Slezak, D.: Association Reducts: Complexity and Heuristics. In: Greco, S., Hata, Y., Hirano, S., Inuiguchi, M., Miyamoto, S., Nguyen, H.S., Słowiński, R. (eds.) RSCTC 2006. LNCS, vol. 4259, pp. 157–164. Springer, Heidelberg (2006)
Slezak, D., Ziarko, W.: Attribute reduction in the Bayesian version of variable precision rough set model. Electronic Notes in Theoretical Computer Science 82, 263–273 (2003)
Słowiński, R. (ed.): Intelligent Decision Support, Handbook of Applications and Advances of the Rough Sets Theory, vol. 11. Kluwer Academic Publishers, Dordrecht (1992)
Słowiński, K., Sharif, E.: Rough Sets Analysis of Experience in Surgical Practice. International Workshop: Rough Sets: State of the Art and Perspectives, Poznań-Kiekrz (1992)
Stepaniuk, J.: Approximation Spaces, Reducts and Representatives. In: Rough Sets in Data Mining and Knowledge Discovery. Springer, Berlin (1998)
Swiniarski, R.W.: Rough sets methods in feature reduction and classification. International Journal of Applied Mathematics and Computer Science 11, 565–582 (2001)
Swiniarski, R.W., Skowron, A.: Rough set methods in feature selection and recognition. Pattern Recognition Letters 24, 833–849 (2003)
Wang, C., Ou, F.: An attribute reduction algorithm based on conditional entropy and frequency of attributes. In: Proceedings of the 2008 International Conference on Intelligent Computation Technology and Automation, ICICTA '08, vol. 1, pp. 752–756. IEEE Computer Society, Washington, DC, USA (2008). DOI: 10.1109/ICICTA.2008.95
Wang, G., Yu, H., Yang, D.: Decision table reduction based on conditional information entropy. Chinese Journal of Computers 25, 759–766 (2002)
Wang, G.Y., Zhao, J., Wu, J.: A comparative study of algebra viewpoint and information viewpoint in attribute reduction. Fundamenta Informaticae 68, 1–13 (2005)
Wróblewski, J.: Finding Minimal Reducts Using Genetic Algorithms. In: Proceedings of JCIS, Wrightsville Beach, NC, September/October 1995, pp. 186–189 (1995)
Wu, W.Z., Zhang, M., Li, H.Z., Mi, J.S.: Knowledge reduction in random information systems via Dempster-Shafer theory of evidence. Information Sciences 174, 143–164 (2005)
Yao, Y., Zhao, Y.: Attribute reduction in decision-theoretic rough set models. Information Sciences 178(17), 3356–3373 (2008). DOI: 10.1016/j.ins.2008.05.010
Zhang, W.X., Mi, J.S., Wu, W.Z.: Knowledge reduction in inconsistent information systems. Chinese Journal of Computers 1, 12–18 (2003)
Zhao, Y., Luo, F., Wong, S.K.M., Yao, Y.Y.: A general definition of an attribute reduction. In: Proceedings of the Second Rough Sets and Knowledge Technology, pp. 101–108 (2007)
ROSE2 – Rough Sets Data Explorer, http://idss.cs.put.poznan.pl/site/rose.html
ROSETTA – A Rough Set Toolkit for Analysis of Data, http://www.lcb.uu.se/tools/rosetta/
RSES – Rough Set Exploration System, http://logic.mimuw.edu.pl/~rses/
Tadeusiewicz, R.: Rola technologii cyfrowych w komunikacji społecznej oraz w kulturze i edukacji. PPT presentation.
