
Knowledge Engineering for Bayesian Networks


Presentation Transcript


  1. Knowledge Engineering for Bayesian Networks Ann Nicholson School of Computer Science and Software Engineering Monash University

  2. Overview • The BN Knowledge Engineering Process • focus on combining expert elicitation and automated methods • Case Study I: Seabreeze prediction • Case Study II: Intelligent Tutoring System for decimal misconceptions • Conclusions

  3. Elicitation from experts • Variables • important variables? values/states? • Structure • causal relationships? • dependencies/independencies? • Parameters (probabilities) • quantify relationships and interactions? • Preferences (utilities) (for decision networks)
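
To make the elicitation steps above concrete, here is a minimal sketch of how elicited variables, structure and parameters can be written down as a BN, assuming the pgmpy Python library. The variables, states and numbers are invented for illustration (a toy seabreeze-style fragment); they are not the networks elicited in either case study.

```python
# Minimal sketch of encoding elicited knowledge as a BN using pgmpy.
# All variables, states and probabilities are invented placeholders.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD

# Structure: elicited causal arcs.
model = BayesianNetwork([("GradientWind", "SeaBreeze"),
                         ("TimeOfDay", "SeaBreeze")])

# Parameters: conditional probability tables quantifying each relationship.
cpd_wind = TabularCPD("GradientWind", 2, [[0.6], [0.4]],
                      state_names={"GradientWind": ["offshore", "onshore"]})
cpd_time = TabularCPD("TimeOfDay", 2, [[0.5], [0.5]],
                      state_names={"TimeOfDay": ["morning", "afternoon"]})
cpd_sb = TabularCPD(
    "SeaBreeze", 2,
    # P(SeaBreeze | GradientWind, TimeOfDay), one column per parent combination
    [[0.3, 0.7, 0.1, 0.4],   # SeaBreeze = yes
     [0.7, 0.3, 0.9, 0.6]],  # SeaBreeze = no
    evidence=["GradientWind", "TimeOfDay"], evidence_card=[2, 2],
    state_names={"SeaBreeze": ["yes", "no"],
                 "GradientWind": ["offshore", "onshore"],
                 "TimeOfDay": ["morning", "afternoon"]})

model.add_cpds(cpd_wind, cpd_time, cpd_sb)
assert model.check_model()
```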

  4. Expert Elicitation Process • These stages are done iteratively • Stops when further expert input is no longer cost effective • Process is difficult and time consuming.

  5. Knowledge discovery • There is much interest in automated methods for learning BNs from data • parameters, structure (causal discovery) • Computationally complex problem, so current methods have practical limitations • e.g. limit number of states, require variable ordering constraints, do not specify all arc directions, don’t handle hidden variables • Evaluation methods
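
As a rough illustration of what automated structure discovery looks like in practice (this is not Tetrad II or CaMML, the programs used later in this talk), the sketch below runs a score-based hill-climbing search, assuming a recent version of the pgmpy library and a placeholder CSV of discrete cases; exact class names may differ between pgmpy versions.

```python
# Illustrative score-based structure learning with pgmpy (not Tetrad II or
# CaMML); file name and column contents are placeholders.
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore, MaximumLikelihoodEstimator
from pgmpy.models import BayesianNetwork

data = pd.read_csv("cases.csv")   # one row per case, discrete-valued columns

# Search over candidate structures, scoring each with BIC.
search = HillClimbSearch(data)
dag = search.estimate(scoring_method=BicScore(data))

# Fit parameters for the discovered structure by counting over the data.
model = BayesianNetwork(dag.edges())
model.fit(data, estimator=MaximumLikelihoodEstimator)
print(sorted(dag.edges()))
```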

  6. The knowledge engineering process 1. Building the BN • variables, structure, parameters, preferences • combination of expert elicitation and knowledge discovery 2. Validation/Evaluation • case-based, sensitivity analysis, accuracy testing 3. Field Testing • alpha/beta testing, acceptance testing 4. Industrial Use • collection of statistics 5. Refinement • Updating procedures, regression testing

  7. Case Study: Seabreeze prediction • Joint project with the Bureau of Meteorology • (Kennett, Korb & Nicholson, PAKDD’2001) • Goal: proof of concept; test ideas about the integration of automated learners & elicitation • What is a seabreeze? (separate picture)

  8. Rule-based predictor and data • The Bureau of Meteorology’s (BOM) rule-based system, currently in use, achieved about 67% predictive accuracy: if the wind component is offshore, the wind component is < 23 knots, and the forecast period is in the afternoon, then a sea breeze is likely (transcribed in the sketch below) • Seabreeze Data: • 30MB from October 1997 to October 1999 from Sydney, Australia. 7% had missing attribute values. • Three types of sensor site data: • Automatic weather stations: ground level data, time • Olympic sites (for sailing, etc.): rain, temp, humidity, wind • Balloon data: gradient-level readings • Predicted variables: wind speed and direction
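
The BOM rule quoted above is simple enough to transcribe directly; the function below is a literal restatement with hypothetical argument names.

```python
def bom_rule_predicts_seabreeze(wind_offshore: bool,
                                wind_speed_knots: float,
                                forecast_in_afternoon: bool) -> bool:
    """Literal transcription of the BOM rule-based predictor quoted above.

    A sea breeze is predicted when the wind component is offshore, the wind
    component is below 23 knots, and the forecast period is in the afternoon.
    """
    return wind_offshore and wind_speed_knots < 23 and forecast_in_afternoon
```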

  9. Methodology • Expert Elicitation. Using variables with available data, forecasters provided causal relations between them. • Tetrad II (Spirtes et al., 1993) uses the Verma-Pearl algorithm (1991) with significance testing to recover causal structure. (NB: usability problems) • CaMML (Wallace and Korb, 1999) uses Minimum Message Length (MML) to discover causal structure. • BNs for Seabreeze Predictions (see separate slide) • All parameterization was performed by Netica, keeping the different methods on an equal footing. • Uses simple counting over training data to estimate conditional probabilities (Spiegelhalter & Lauritzen, 1990); a sketch of this counting scheme follows below.
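
A generic sketch of the counting scheme mentioned in the last bullet: for each parent configuration, tally how often each child state occurs and normalise, with uniform pseudo-counts so that configurations never seen in the data default to a uniform distribution. This illustrates the idea only; it is not Netica's implementation.

```python
from collections import defaultdict

def estimate_cpt(cases, child, parents, child_states, pseudo_count=1.0):
    """Estimate P(child | parents) by counting over training cases.

    `cases` is an iterable of dicts mapping variable names to states.
    Uniform pseudo-counts act as a prior, so parent configurations that
    never occur in the data get a uniform distribution.
    """
    counts = defaultdict(lambda: {s: pseudo_count for s in child_states})
    for case in cases:
        parent_config = tuple(case[p] for p in parents)
        counts[parent_config][case[child]] += 1.0

    cpt = {}
    for parent_config, state_counts in counts.items():
        total = sum(state_counts.values())
        cpt[parent_config] = {s: c / total for s, c in state_counts.items()}
    return cpt
```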

  10. Predictive accuracy • Instead of predicting seabreeze existence, we substituted a more demanding task: prediction of wind direction at ground level • From this (and the gradient-level wind direction), seabreezes can be inferred. • Training/testing regime (sketched below) • randomly select 80% of data for training • use the remainder for testing accuracy • Results • See separate slide (comparison of airport site type network versions)
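
The training/testing regime amounts to a random 80/20 split and an accuracy count on the held-out cases. The sketch below is generic: `predict` stands in for whichever model is being evaluated (elicited BN, discovered BN, or the rule-based predictor), and the model-training step is elided.

```python
import random

def split_and_score(cases, predict, target, train_fraction=0.8, seed=0):
    """Randomly hold out 20% of cases and measure predictive accuracy.

    `predict` maps a test case (a dict of variable values) to a predicted
    value of the target variable, e.g. ground-level wind direction.
    """
    rng = random.Random(seed)
    cases = list(cases)
    rng.shuffle(cases)
    cut = int(train_fraction * len(cases))
    train, test = cases[:cut], cases[cut:]
    # ...train or parameterise the model on `train` here...
    correct = sum(1 for case in test if predict(case) == case[target])
    return train, test, correct / len(test)
```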

  11. Predictive accuracy conclusions • Elicited and discovered nets (MML + Tetrad II) are systematically superior to the BOM rule-based predictor • Discovered networks are superior to elicited nets in the first 3 hrs (confidence intervals are ~10%) • There is a strong time component to accuracy

  12. Adaptation: Incremental learning • Learn structure from the first year’s data (using MML) • Reparameterise nets over the second year’s data, while predicting seabreezes • greedy search yielded a time decay factor of e^(-0.05t) (illustrated below) • Results (see separate slides) • comparison of incremental and normal training methods by BN type and by time of year • the incremental method performed better
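
One way to read the decay factor: when the CPTs are re-estimated, a case observed t time steps earlier contributes a weight of e^(-0.05t) to the counts, so recent data dominates. The sketch below illustrates that weighting with hypothetical data structures; it is not the code used in the study.

```python
import math

DECAY_RATE = 0.05  # decay rate found by the greedy search described above

def decayed_counts(timestamped_cases, child, parents, now):
    """Weight each case by exp(-0.05 * age) before counting, so that recent
    data dominates the re-estimated conditional probabilities.

    `timestamped_cases` is an iterable of (timestamp, case) pairs, where each
    case is a dict of variable values; timestamps share the units of `now`.
    """
    counts = {}
    for timestamp, case in timestamped_cases:
        weight = math.exp(-DECAY_RATE * (now - timestamp))
        key = (tuple(case[p] for p in parents), case[child])
        counts[key] = counts.get(key, 0.0) + weight
    return counts   # normalise per parent configuration to obtain CPTs
```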

  13. Case Study II: Intelligent tutoring • Tutoring domain: primary and secondary school students’ misconceptions about decimals • Based on Decimal Comparison Test (DCT) • student asked to choose the larger of pairs of decimals • different types of pairs reveal different misconceptions • ITS System involves computer games involving decimals • This research also looks at a combination of expert elicitation and automated methods

  14. Expert classification of Decimal Comparison Test (DCT) results

  15. The ITS architecture (separate diagram) • Inputs: answers to decimal comparison test items and computer game items, plus optional information about the student (e.g. age) and classroom diagnostic test results • Adaptive Bayesian Network: generic BN model of the student, used to diagnose misconceptions, predict outcomes, and identify the most useful information • System Controller Module: sequencing tactics (select next item type, decide to present help, decide to change to a new game, identify when expertise is gained) • Computer games: Hidden Number, Flying Photographer, Decimaliens, Number Between, … • Outputs: feedback and items to the student, and a report on the student to the teacher for classroom teaching activities

  16. Expert Elicitation • Variables • two classification nodes: fine and coarse • item types: (i) H/M/L (ii) 0-N • Structure • arcs from classification to item type • item types independent given classification • Parameters • careless mistake (3 different values; see the sketch below) • expert ignorance: '-' entries in the table (uniform distribution)
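
The careless-mistake parameter can be pictured as follows: a student in a class whose predicted answer pattern is X gives X with probability 1 - p and slips to some other pattern with probability p, while rows the expert left unspecified ('-') are filled with a uniform distribution. The sketch below uses placeholder numbers, not the elicited values.

```python
import numpy as np

def item_type_cpt(expected_answers, n_answer_states, p_careless=0.05):
    """Build P(item-type answer | misconception class).

    `expected_answers[c]` is the answer state class c should give, or None
    where the expert expressed ignorance (filled with a uniform distribution).
    With probability `p_careless` the student slips to another answer state.
    """
    n_classes = len(expected_answers)
    cpt = np.zeros((n_answer_states, n_classes))
    for c, expected in enumerate(expected_answers):
        if expected is None:                       # expert ignorance: uniform
            cpt[:, c] = 1.0 / n_answer_states
        else:
            cpt[:, c] = p_careless / (n_answer_states - 1)
            cpt[expected, c] = 1.0 - p_careless
    return cpt
```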

  17. Expert Elicited BN

  18. Evaluation process • Case-based evaluation (see the sketch below) • experts checked individual cases • sometimes, if the prior was low, the ‘true’ classification did not have the highest posterior (but usually had the biggest change in ratio) • Adaptiveness evaluation • priors change after each set of evidence • Comparison evaluation • Differences in evaluation between BN and expert rule
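
The case-based check can be made concrete: compare the posterior over classes with the prior and ask both which class has the highest posterior and which class's probability grew by the largest ratio. A generic sketch, with hypothetical inputs:

```python
def evaluate_case(prior, posterior, true_class):
    """Compare two diagnostic criteria for a single test case.

    `prior` and `posterior` map class names to probabilities. Returns whether
    the true class wins under (a) the maximum-posterior rule and (b) the
    largest posterior/prior ratio, the criterion that usually favoured the
    true class even when its prior was low.
    """
    map_class = max(posterior, key=posterior.get)
    ratio_class = max(posterior, key=lambda c: posterior[c] / prior[c])
    return map_class == true_class, ratio_class == true_class
```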

  19. Comparison: expert BN vs rule (separate slide: outcomes categorised as Same, Desirable, or Undesirable)

  20. Results (separate slide: proportions of Same / Desirable / Undesirable outcomes when varying the probability of a careless mistake, and when varying the granularity of the item type: 0-N vs H/M/L)

  21. Automated methods: Classification • Applied the SNOB classification program, based on MML • Using data from 2437 students, 30 items, SNOB produced 14 classes • 10 corresponded to expert classes • 2 expert classes, LRV and AU, were not found • 4 classes were mainly combinations of AU and UN • unable to classify 0.5% of students • Using pre-processed data (0-N or H/M/L) on 6 item types, SNOB found only 5 or 6 classes

  22. Automated Methods • Parameters • Again, used the Netica counting method • Structure • Applied CaMML to pre-processed data (0-N and H/M/L) • constrained so that the classification node was a parent of the item type nodes • unconstrained • Many different network structures were found, all with arcs between item type nodes, of varying complexity

  23. Results from automated methods

  24. Conclusions • Automated methods yielded BNs which gave quantitative results comparable to or better than elicited BNs • validation of automated methods (?) • Undertaking both elicitation and automated KE resulted in additional domain analysis (e.g. 0-N vs H/M/L) • Hybrid of expert and automated approaches is feasible • methodology for combining is needed • evaluation measures and methods needed (may be domain specific)
