
Monitoring IVHM Systems using a Monitor-Oriented Programming Framework





  1. Monitoring IVHM Systems using a Monitor-Oriented Programming Framework
  S. Ghoshal, S. Manimaran (QSI)
  G. Rosu, T. Serbanuta, G. Stefanescu (UIUC)

  2. IVHM System Analysis
  • IVHM systems impose significantly higher safety and dependability requirements than most other systems
  • Formal analysis of IVHM systems is therefore highly desirable …
  • … but also challenging, due to their highly integrated nature (different technologies, hardware, software, sensors, etc.) and combined complexity

  3. Overview • Our Approach • MOP (University of Illinois at Urbana) • TEAMS (Qualtech Systems Inc.) • Project Research Plan • Conclusion and Future Work

  4. Our Approach to IVHM Analysis
  • Separation of concerns
    • State "health" assessment, or diagnosis
    • Temporal behaviors of state sequences
  [Diagram: TEAMS performs model-based observation of the IVHM system, producing abstract events/states for the MOP temporal behavior monitor; violation/validation results drive steering/recovery back into the IVHM system]

  5. Overview • Our Approach • MOP (University of Illinois at Urbana) • TEAMS (Qualtech Systems Inc.) • Project Research Plan • Conclusion and Future Work

  6. Monitoring-Oriented Programming (MOP)
  http://fsl.cs.uiuc.edu/mop
  • Proposed in 2003: RV'03, ICFEM'04, RV'05, CAV'05, TACAS'05, CAV'06, CAV'07, OOPSLA'07, ICSE'08, …
  [Diagram: the MOP framework, with logic plugins (ERE, LTL, ptLTL, CFG, ptCaRet, …) on one axis and language instances (JavaMOP, BusMOP, …) on the other]

  7. What is MOP?
  • A framework for reliable software development
  • Monitoring is a basic design discipline …
  • … rather than an "add-on" grafted onto existing code
  • Recovery is allowed and encouraged
  • Provides to programmers, and hides under the hood, a large body of formal-methods knowledge and techniques
    • Monitor synthesis algorithms
    • Generic for different languages and application domains

  8. Example: Correct and efficient sorting
  • Heap sort: O(n log(n)), but proving it correct is hard
  • Insertion sort: O(n2), provably correct
  • Monitor checking that the vector is sorted: O(n)
  • Works in MOP: run heap sort, monitor the output for sortedness, and recover with the provably correct insertion sort on violation
  • We have an efficient and provably correct sorting algorithm, and we avoided proving heap sort correct, which is hard!
  • Still need to show that sorting does not destroy the multiset of elements
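As an illustration not taken from the slides, the scheme can be sketched in Python (all function names here are hypothetical): run the fast but unverified heap sort, check the O(n) sortedness property at runtime, and recover with the provably correct insertion sort on violation.

```python
import heapq

def insertion_sort(xs):
    # Simple, easy-to-verify O(n^2) sort used as the trusted fallback.
    out = list(xs)
    for i in range(1, len(out)):
        j = i
        while j > 0 and out[j - 1] > out[j]:
            out[j - 1], out[j] = out[j], out[j - 1]
            j -= 1
    return out

def heap_sort(xs):
    # Fast O(n log n) sort that we would rather not verify formally.
    h = list(xs)
    heapq.heapify(h)
    return [heapq.heappop(h) for _ in range(len(h))]

def is_sorted(xs):
    # The O(n) runtime monitor: checks the sortedness postcondition.
    return all(a <= b for a, b in zip(xs, xs[1:]))

def monitored_sort(xs):
    result = heap_sort(xs)
    if not is_sorted(result):          # violation handler:
        result = insertion_sort(xs)    # recover with the trusted sort
    return result
```

Note that, as the slide points out, this monitor alone does not show the output is a permutation of the input; preservation of the multiset must still be argued separately.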

  9. MOP Example: "Authentication before use"
  [Timeline diagram: Execution 1 (correct): begin … authenticate … use … end. Execution 2 (incorrect): begin … use … end, with no authenticate before the use]

  10. MOP Example: "Authentication before use"
  class Resource {
    /*@ class-scoped SafeUse() {
      event authenticate : after(exec(* authenticate()))
      event use : before(exec(* access()))
      ptltl: use -> <*> authenticate
    } @*/
    void authenticate() {...}
    void access() {...}
    ...
  }
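A hand-written Python sketch (hypothetical, not the code MOP generates) of the monitor for the past-time LTL property use -> <*> authenticate: a single boolean suffices to remember whether authenticate has ever occurred in the past.

```python
class SafeUseMonitor:
    """Sketch of a monitor for the ptLTL property  use -> <*> authenticate
    ('every use is preceded, somewhere in the past, by an authenticate')."""

    def __init__(self):
        self.authenticated = False  # tracks <*> authenticate

    def on_authenticate(self):
        self.authenticated = True

    def on_use(self):
        # The property holds at this use event iff authenticate was seen before.
        return self.authenticated
```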

  11. MOP Example: "Enforce authentication before use"
  [Timeline diagram: Execution 1 (correct): begin … authenticate … use … end. Execution 2 (incorrect but corrected): begin … use … end, where the monitor calls authenticate() before the use]

  12. MOP Example: "Enforce authentication before use"
  class Resource {
    /*@ class-scoped SafeUse() {
      event authenticate : after(exec(* authenticate()))
      event use : before(exec(* access()))
      ptltl : use -> <*> authenticate
      violation { @this.authenticate(); }
    } @*/
    void authenticate() {...}
    void access() {...}
    ...
  }
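The effect of the violation handler can be sketched in Python (a hypothetical analogue, not MOP output): when a use event would violate use -> <*> authenticate, the handler calls authenticate() and execution continues.

```python
class Resource:
    """Sketch of the enforced property: the check in access() plays the
    role of the MOP violation handler  @this.authenticate()."""

    def __init__(self):
        self.authenticated = False

    def authenticate(self):
        self.authenticated = True

    def access(self):
        # Recovery: if the ptLTL property would be violated here,
        # authenticate first, then proceed with the use.
        if not self.authenticated:
            self.authenticate()
        return "resource contents"
```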

  13. MOP Example: "Correcting method matching"
  • Method openRegKey should be followed by closeRegKey, not by closeHandle
  /*@ class-scoped SafeClose() {
    event openRegKey : after(exec(* openRegKey()))
    event closeHandle : before(exec(* closeHandle()))
    event closeRegKey : before(exec(* closeRegKey()))
    ere : any* openRegKey closeHandle
    validation { @this.closeRegKey(); return; }
  } @*/

  14. MOP Example: Profiling
  • How many times is a file opened, written to, and then closed?
  /*@ class-scoped FileProfiling() {
    [int count = 0; int writes = 0;]
    event open : after(call(* open(..))) {writes = 0;}
    event write : after(call(* write(..))) {writes++;}
    event close : after(call(* close(..)))
    ere : (open write+ close)*
    violation { @RESET; }
    validation { count++; File2.log(count + ": " + writes); }
  } @*/
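A simplified Python sketch of the monitor this ERE would drive (hypothetical names; the real generated code differs): a small state machine tracks (open write+ close)*, resetting on violation and logging on each completed session.

```python
class FileProfilingMonitor:
    """Sketch of a monitor for the ERE  (open write+ close)* :
    counts completed open/write+/close sessions, logging writes per session."""

    def __init__(self):
        self.count = 0        # completed sessions (validation handler)
        self.writes = 0       # writes in the current session
        self.state = "closed" # closed -> open -> writing -> closed
        self.log = []

    def on_open(self):
        self.writes = 0
        self.state = "open"

    def on_write(self):
        if self.state in ("open", "writing"):
            self.writes += 1
            self.state = "writing"
        else:
            self.reset()      # write outside a session: violation, @RESET

    def on_close(self):
        if self.state == "writing":   # a full (open write+ close) completed
            self.count += 1
            self.log.append(f"{self.count}: {self.writes}")
        self.state = "closed"         # close without writes: nothing counted

    def reset(self):
        self.state = "closed"
        self.writes = 0
```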

  15. Fail-Fast Iterators
  • The following code throws an exception in Java:
    Vector v = new Vector();
    v.add(new Integer(1));
    Iterator i = v.iterator();
    v.add(new Integer(2));
    i.next(); // ConcurrentModificationException
  • No exception is raised if one uses Enumeration instead of Iterator
  • A Java language decision, showing that properties referring to sets of objects are important

  16. MOP Example: Safe Enumeration
  • Basic safety property: if nextElement() is invoked on an enumeration object, then the corresponding collection (vector) must not have changed since the creation of the enumeration object

  17. MOP Example: Safe Enumeration
  /*@ globalvalidation SafeEnum(Vector v, Enumeration+ e) {
    event create<v,e> : after(call(Enumeration+.new(v, ..))) returning e
    event updatesource<v> : after(call(* v.add*(..))) \/ …
    event next<e> : before(call(Object e.nextElement()))
    ere : create next* updatesource+ next
  } @*/
  • AspectJ code generated from the above: ~700 LOC
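Each (v, e) pair gets its own monitor instance; one such instance can be sketched in Python as a small state machine for the pattern create next* updatesource+ next (hypothetical code, not the generated AspectJ).

```python
class SafeEnumMonitor:
    """Per-(vector, enumeration) monitor for the ERE
    create next* updatesource+ next : a match means the vector was
    modified after the enumeration was created, and next was still called."""

    def __init__(self):
        self.state = "created"       # entered on the create<v,e> event

    def on_updatesource(self):
        if self.state in ("created", "updated"):
            self.state = "updated"   # updatesource+ seen

    def on_next(self):
        if self.state == "updated":
            self.state = "matched"   # pattern matched: unsafe enumeration
        return self.state != "matched"   # False signals the match
```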

  18. MOP Example: Safe Locking Policy
  • Each lock should be released as many times as it was acquired

  19. MOP Example: Safe Locking
  /*@ method-scoped SafeLock(Lock l) {
    event acquire<l> : before(call(* l.acquire()))
    event release<l> : before(call(* l.release()))
    cfg : S -> epsilon | S acquire S release
  } @*/
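Because this particular grammar generates exactly the balanced acquire/release sequences, a per-method monitor can be sketched with a single counter (a hand-written Python illustration, not the CFG plugin's actual algorithm): the counter must never go negative and must be zero at method exit.

```python
class SafeLockMonitor:
    """Sketch of a monitor for the CFG  S -> epsilon | S acquire S release ,
    i.e. the language of balanced acquire/release sequences."""

    def __init__(self):
        self.depth = 0    # acquires not yet released
        self.ok = True

    def on_acquire(self):
        self.depth += 1

    def on_release(self):
        self.depth -= 1
        if self.depth < 0:        # release without a matching acquire
            self.ok = False

    def at_end(self):
        # At method exit, every acquire must have been released.
        return self.ok and self.depth == 0
```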

  20. MOP Approach to Monitoring • Keep the following distinct and generic: • specification formalisms • event definitions • validation handlers

  21. MOP Distinguished Features: Extensible Logic Framework
  • Observation: there is no silver-bullet logic for specifications
  • MOP logic plugins (the "how") encapsulate monitor synthesizers; so far we have plugins for:
    • ERE (extended regular expressions), PtLTL (past-time LTL), FtLTL (future-time LTL), ATL (Allen temporal logic), JML (Java Modeling Language), PtCaRet (past-time Call/Return), CFG (context-free grammars)
  • Generic universal parameters
    • Allow monitor instances per groups of objects

  22. MOP Distinguished Features: Configurable Monitors
  • Working scope:
    • Checkpoint: check spec at a defined place
    • Method: within a method call
    • Class: check spec everywhere during object lifetime
    • Interface: check spec at the boundaries of methods
    • Global: may refer to more than one object
  • Running mode:
    • Inline: shares resources with the application
    • Outline: communicates with the application via sockets
    • Offline: generated monitor has random access to a log

  23. MOP Distinguished Features: Decentralized Monitoring/Indexing
  • The problem: how to monitor a universally quantified specification efficiently!
  • Events create<v,e>, updatesource<v>, next<e>; pattern create next* updatesource+ next, for all pairs (v,e)

  24. Decentralized Monitoring
  • Monitor instances (one per parameter instance): Mp1, Mp2, Mp3, …, Mp1000

  25. Mv,e2 Mp1 Mv,e1 … Mp1000 Indexing … • The problem: how can we retrieve all needed monitor instances efficiently? udatesource<v> Naïve implementation very inefficient (both time- and memory-wise)

  26. MOP’s Decentralized Indexing
  • Monitors scattered all over the program
  • Monitor states piggybacked onto object states
  • Weak references
  • SafeEnum events: create<v,e>, updatesource<v>, next<e>
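The weak-reference idea can be sketched in Python with weakref.WeakKeyDictionary (hypothetical names such as MonitorRegistry; JavaMOP's actual indexing differs): monitor instances are looked up from the parameter objects themselves, so a monitor becomes collectible together with the objects it watches.

```python
import weakref

class MonitorRegistry:
    """Sketch of decentralized indexing: monitors are reachable only
    through weak references keyed on the monitored objects."""

    def __init__(self, monitor_factory):
        self._by_object = weakref.WeakKeyDictionary()
        self._factory = monitor_factory

    def monitors_for(self, obj):
        # All monitor instances in whose parameter instance obj occurs.
        return self._by_object.setdefault(obj, [])

    def create(self, *objs):
        # One monitor instance per parameter instance (e.g. per (v, e) pair),
        # indexed from each participating object.
        monitor = self._factory()
        for obj in objs:
            self.monitors_for(obj).append(monitor)
        return monitor
```

When a vector and its enumerations die, the weak keys disappear and the associated monitor states can be garbage-collected with them.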

  27. MOP: Evaluation
  • More than 100 program-property pairs
  • DaCapo benchmark, Tracematches benchmark, Eclipse, …
  • Overhead < 8% in most cases; close to hand-optimized monitors
  Overhead in %, MOP monitors vs. hand-optimized monitors (two values per property):
            SafeEnum    SafeIterator   HashMap     HasNext       LeakingSync  ClosedReader
  antlr     0.0 / 1.5   0.0 / 0.0      0.0 / 1.1   0.4 / 0.0     0.0 / 0.0    5.8 / 0.0
  chart     0.0 / 0.0   0.0 / 0.0      3.6 / 4.8   0.0 / 0.0     0.5 / 0.0    0.0 / 0.0
  eclipse   4.1 / 2.8   0.0 / 1.4      3.7 / 0.5   3.8 / 1.5     3.0 / 3.1    2.2 / 2.4
  fop       1.2 / 0.6   1.5 / 0.0      0.0 / 0.0   0.8 / 1.5     0.5 / 1.0    0.0 / 0.0
  hsqldb    3.3 / 0.0   0.9 / 1.2      0.0 / 2.1   0.8 / 0.0     1.4 / 1.4    0.0 / 0.0
  jython    0.6 / 0.0   0.8 / 0.5      0.2 / 0.3   0.0 / 0.6     0.0 / 2.3    0.4 / 0.2
  luindex   1.6 / 0.2   1.9 / 0.5      1.2 / 1.8   0.3 / 0.0     3.2 / 2.2    1.7 / 1.1
  lusearch  0.5 / 0.0   0.0 / 0.0      0.0 / 0.0   0.3 / 0.0     1.1 / 0.6    0.0 / 0.1
  pmd       0.0 / 0.0   44.8 / 11.3    0.0 / 0.0   25.4 / 13.7   5.4 / 8.0    0.0 / 0.0
  xalan     3.5 / 4.4   6.7 / 5.4      4.7 / 6.5   0.0 / 2.8     1.5 / 1.7    2.2 / 4.5

  28. MOP: Evaluation (cont.)
  Results for Tracematches benchmarks, overhead in %:
  Property   Program    LOC     Hand-Optimized  MOP    Tracematches  PQL
  Listener   ajHotDraw  21.1K   0               6.6    354           2193
  SafeEnum   jHotDraw   9.5K    0.1             136    1509          7084
  NullTrack  CerRevSim  1.4K    210             232    452           N/A
  Hashtable  Weka       9.9K    3.3             3.3    15.2          N/A
  HashSet    Aprove     438.7K  21.2            23.9   124.3         N/A
  Reweave    ABC        51.2K   11.1            20.2   63.5          N/A
  • Even significantly faster than logic-specific solutions

  29. Overview • Our Approach • MOP (University of Illinois at Urbana) • TEAMS (Qualtech Systems Inc.) • Project Research Plan • Conclusion and Future Work

  30. QSI’s TEAMS
  • Model-based diagnosis system
  • TEAMS model = dependency model capturing relationships: failure modes → observable effects
  • QSI’s TEAMS tool set:
    • TEAMS Designer: helps create models
    • TEAMS-RT: processes data in real time
    • TEAMATE: infers health status + optimal tests
    • TEAMS-RDS: remote diagnostic server

  31. TEAMS Designer
  • Helps users create models (models can also be imported)
  • Captures component and data dependencies + other aspects that allow efficient diagnosis
  • Model = hierarchical, multi-layered directed graph
    • Node: physical component
    • Test point: "observation" node
    • Edge: cause-effect dependency
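A toy Python sketch of such a dependency model (all names hypothetical; TEAMS itself is far richer): failure modes reach test points through the directed graph, a failed test implicates every failure mode that can reach it, and, assuming perfect tests, a passing test exonerates the failure modes feeding into it.

```python
from collections import defaultdict

class DependencyModel:
    """Toy cause-effect dependency graph: failure modes -> components -> tests."""

    def __init__(self):
        self.edges = defaultdict(set)   # node -> set of directed successors

    def add_edge(self, cause, effect):
        self.edges[cause].add(effect)

    def reaches(self, node, target, seen=None):
        # Depth-first reachability along cause-effect edges.
        seen = seen if seen is not None else set()
        if node == target:
            return True
        seen.add(node)
        return any(self.reaches(n, target, seen)
                   for n in self.edges[node] if n not in seen)

    def suspects(self, failure_modes, failed_tests, passed_tests):
        # Suspect = can explain some failed test and is not exonerated
        # by a passing test it feeds into (perfect-test assumption).
        return {f for f in failure_modes
                if any(self.reaches(f, t) for t in failed_tests)
                and not any(self.reaches(f, t) for t in passed_tests)}
```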

  32. Overview • Our Approach • MOP (University of Illinois at Urbana) • TEAMS (Qualtech Systems Inc.) • Project Research Plan • Conclusion and Future Work

  33. Project Objectives • Develop tools, techniques and ultimately an integrated framework for IVHM system monitoring, control and verification • Show that runtime verification and monitoring can play a crucial role in the development of safe, robust, reliable, scalable and operational IVHM systems

  34. Project Plan • TEAMS: capture system “health” • MOP: generate and integrate monitors • Integrated system: check IVHM system at runtime, steering if failures are detected TEAMS Violation / Validation IVHM System MOP Temporal behavior monitor Model-based observation Abstract events/states Steering / Recovery

  35. What is done (TEAMS side): Case study: B-737 Autoland
  • With data provided by Celeste M. Belcastro and Kenneth Eure, a model for the B-737 is being developed

  36. What is done (MOP side): Case study: B-737 Autoland
  • Two new logic plugins:
    • Context-free patterns
    • Past-time LTL with calls/returns
    • (still missing: timed logic plugins)
  • Improved monitor garbage collection
  • Current MOP is more than an order of magnitude faster than other RV systems

  37. Overview • Our Approach • MOP (University of Illinois at Urbana) • TEAMS (Qualtech Systems Inc.) • Project Research Plan • Conclusion and Future Work

  38. Conclusion and Future Work
  • Discussed initial steps towards an integrated framework for IVHM system monitoring, control and verification
  • Separation of concerns:
    • Observation/diagnosis of system "health"
    • Monitoring of temporal behaviors
  • A lot remains to be done:
    • Complete the TEAMS model for the B-737 autoland
    • Automate the integration of TEAMS and MOP
