
Property Assurance in Middleware for Distributed Real-Time Systems*

Christopher Gill (cdgill@cse.wustl.edu), Department of Computer Science and Engineering, Washington University, St. Louis, MO. Seminar at Stanford University, Thursday, March 15, 2007.



Presentation Transcript


  1. Property Assurance in Middleware for Distributed Real-Time Systems*
  Christopher Gill, cdgill@cse.wustl.edu
  Department of Computer Science and Engineering, Washington University, St. Louis, MO
  Seminar at Stanford University, Thursday, March 15, 2007
  *Research supported in part by NSF CAREER award CCF-0448562
  Joint work with Venkita Subramonian, César Sánchez, Henny Sipma, and Zohar Manna

  2. A Motivating Example: Real-Time Image Transmission
  [Figure: a camera (client side: image source, transmission adaptation, middleware) sends images over a low, often variable, bandwidth radio link to a console (server side: middleware, virtual folder that displays the images)]
  • Chains of end-to-end tasks
    • E.g., compress, transmit, decompress, analyze, and then display images
  • Property assurance is crucial
    • Soft real-time constraints
    • Deadlock freedom
  • Many applications have similar needs
    • Is correct reuse of middleware possible?
  [Gill et al., “Integrated Adaptive QoS Management in Middleware: An Empirical Case Study” (RTAS ‘04)]
  [Wang et al., “CAMRIT: Control-based Adaptive Middleware for Real-time Image Transmission” (RTAS ‘04)]

  3. Middleware for Distributed Real-Time Systems
  [Figure: a “Distributed System Software Stack” with ACE and TAO+CIAO as the middleware layers]
  • Layered stacks of mechanisms
    • thread, port, socket, timer
    • reactor, monitor
    • client, server, gateway, ORB
  • Task chains span multiple hosts
    • may be initiated asynchronously
  • Limited host resources
    • used by multiple task chains

  4. One Widely Used Mechanism: Reactor
  [Figure: on data arrival at a socket, the reactor’s select() loop detects the ready handle in its read handle set and makes a handle_input() upcall on the registered event handler, which calls read() on behalf of the application]
  • Reactor abstraction has many variations:
    • select() vs. WaitForMultipleObjects()
    • single thread vs. thread pool
    • unsynchronized vs. mutex vs. readers-writer
    • …
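To make the dispatching structure concrete, here is a minimal ACE-style sketch (the handler class, buffer size, and image-pipeline hand-off are illustrative, not code from the talk):

    #include "ace/Event_Handler.h"
    #include "ace/Reactor.h"
    #include "ace/SOCK_Stream.h"

    // Illustrative handler: the reactor calls handle_input() when its
    // select() loop detects data on this handler's socket handle.
    class Image_Chunk_Handler : public ACE_Event_Handler {
    public:
      explicit Image_Chunk_Handler (ACE_SOCK_Stream &peer) : peer_ (peer) {}

      // Tells the reactor which handle to include in its read handle set.
      ACE_HANDLE get_handle () const override { return peer_.get_handle (); }

      // Upcall made by the reactor when data arrives.
      int handle_input (ACE_HANDLE) override {
        char buf[4096];
        ssize_t n = peer_.recv (buf, sizeof buf);
        if (n <= 0)
          return -1;              // ask the reactor to remove this handler
        // ... hand the bytes to the application-level image pipeline ...
        return 0;                 // stay registered for further input events
      }

    private:
      ACE_SOCK_Stream &peer_;
    };

    int main () {
      ACE_SOCK_Stream peer;       // assume connected elsewhere (illustrative)
      Image_Chunk_Handler handler (peer);
      ACE_Reactor::instance ()->register_handler
        (&handler, ACE_Event_Handler::READ_MASK);
      ACE_Reactor::instance ()->run_reactor_event_loop ();
      return 0;
    }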

  5. An Illustration of Inherent Complexity
  Wait-on-Reactor
  • Handler waits in the reactor for the reply
    • E.g., set read_mask, call select() again
  • Other requests can be processed while replies are still pending
  • For efficiency, the call stack remembers the handler continuation
    • Intervening requests may delay reply processing (LIFO semantics)
  Wait-on-Connection
  • Handler waits on the socket connection for the reply
    • Blocking call to recv()
  • One less thread listening on the Reactor for new requests
  • Exclusive handling of the reply
  • However, may cause deadlocks if reactor upcalls are nested
  Two essential research questions:
  • How can we represent and analyze such diverse behavior?
  • How can we enforce properties that span hosts, efficiently?
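The difference between the two wait strategies shows up directly in handler code. A hedged sketch (class and member names are ours, not the talk's Gateway code):

    #include "ace/Event_Handler.h"
    #include "ace/Reactor.h"
    #include "ace/SOCK_Stream.h"

    // Illustrative handler that issues a nested request and awaits a reply.
    class Request_Handler : public ACE_Event_Handler {
    public:
      explicit Request_Handler (ACE_SOCK_Stream &peer) : peer_ (peer) {}
      ACE_HANDLE get_handle () const override { return peer_.get_handle (); }

      // Wait-on-Connection: block in recv() until the reply arrives.
      // Exclusive handling of the reply, but this thread no longer listens
      // on the reactor, which can deadlock under nested upcalls.
      int wait_on_connection () {
        peer_.send ("REQ", 3);
        char reply[256];
        return peer_.recv (reply, sizeof reply) > 0 ? 0 : -1;
      }

      // Wait-on-Reactor: send the request, register read interest, and return
      // to the event loop; handle_input() acts as the continuation when the
      // reply arrives. Intervening upcalls may delay it (LIFO semantics).
      int wait_on_reactor () {
        peer_.send ("REQ", 3);
        return ACE_Reactor::instance ()->register_handler
          (this, ACE_Event_Handler::READ_MASK);
      }

      int handle_input (ACE_HANDLE) override {
        char reply[256];
        return peer_.recv (reply, sizeof reply) > 0 ? 0 : -1;
      }

    private:
      ACE_SOCK_Stream &peer_;
    };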

  6. Two Essential Technical Objectives
  • A principled basis for middleware verification
    • Model each mechanism’s inherent complexity accurately
    • Remove unnecessary complexity through abstraction
    • Compose models tractably and with high fidelity to the system itself
  • New protocols and mechanisms for property enforcement
    • Exploit call graph structure and other domain-specific information
    • Develop efficient local mechanisms for end-to-end enforcement
    • Design frameworks to support entire families of related protocols

  7. Model Architecture in IF for ACE
  • Network/OS layer: inter-process communication abstractions
  • Middleware layer: ACE pattern-oriented abstractions
  • Application layer: application-specific semantics within ACE event handlers

  8. Modeling Threads
  • Challenge
    • No native constructs for threads in model checkers that currently support timed automata
  • Option 1: model all thread actions as a single automaton
    • Suitable for high-level modeling of application semantics
  • Option 2: model a thread as multiple interacting automata
    • Interactions model the flow of control
    • This option better abstracts the nuances of ACE-level mechanisms
  [Figure: two automata, Foo and Bar, interacting via input/output method_request and method_result messages]

  9. Modeling Thread Scheduling Semantics (1/4)
  • Easy to achieve with one automaton per thread
    • Specify to the model checker directly
    • E.g., using IF priority rules:
      prio_rule: pid1 < pid2 if pid1 instanceof Activity1 and pid2 instanceof Activity2
  • More difficult with more than one automaton per thread
    • Thread of control spans interactions among automata
    • Need to express thread scheduling in terms of execution control primitives provided by the model checker
  [Figure: one automaton per thread (Activity1 “update display”, Activity2 “control flow rate”) vs. two automata per thread (Foo and Bar exchanging input/output m_req and m_result messages)]

  10. Modeling Thread Scheduling Semantics (2/4)
  • Solution
    • Introduce a thread id that is propagated along automata interactions
    • The thread id acts as an index into a storage area which holds each thread’s scheduling parameters
  • Hint to the model checker: give higher preference to the automaton whose “thread” (pointed to by the thread id) has higher priority
      thread_schedule: pid1 < pid2 if pid1 instanceof Foo1 and pid2 instanceof Bar1
        and ({Foo1}pid1).threadid <> ({Bar1}pid2).threadid
        and ({Thread}(({Foo1}pid1).threadid)).prio < ({Thread}(({Bar1}pid2).threadid)).prio
  [Figure: resulting behavior for two threads (Prio=5 and Prio=8) whose Foo/Bar automata interleave according to this rule]

  11. Modeling Thread Scheduling Semantics (3/4)
  • What if two threads have the same priority?
  • In an actual implementation, run-to-completion (SCHED_FIFO) may control the possible interleavings
  • How can we model run-to-completion?
  [Figure: the interleaving tree of Foo1, Foo2, Foo3 and Bar1, Bar2, Bar3 transitions; how do we prune out this space?]

  12. Modeling Thread Scheduling Semantics (4/4)
  • Solution
    • Record the id of the currently executing thread
    • Update it when executing actions in each automaton
  • Hint to the model checker: give higher preference to the automaton whose thread is the currently running thread; non-deterministic choice if Current is nil
  [Figure: execution tree annotated with Current=nil, Current=1, Current=2 as the Foo and Bar transitions of each thread run to completion]

  13. Problem: Over-constraining Concurrency
  • The hint above (prefer the automaton whose thread is the currently running thread; non-deterministic choice if Current is nil) can over-constrain the exploration, since Current is never reset
  [Figure: with Current=2, Bar3 is always chosen to run as time progresses, and Foo3 is never explored]

  14. Solution: Idle Catcher Automaton
  • Key idea: a lowest-priority “catcher” runs when all others are blocked
    • E.g., the catcher thread in middleware group scheduling (RTAS ‘05)
  • Here, an idle catcher automaton
    • runs when all other automata are idle (not enabled), but before time progresses
    • resets the value of the current id to nil
  [Figure: with Foo3 and Bar3 blocked and Current=2, the idle catcher runs and sets Current=nil; after time progresses, either Foo3 or Bar3 could be chosen to run, so the over-constraining is eliminated]
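Read operationally, the preference from the last three slides amounts to a choice function over enabled automata. The following is our own rough rendering in C++ (it is not IF code; the names Automaton and choose_candidates are illustrative):

    #include <optional>
    #include <vector>

    struct Automaton { int thread_id; bool enabled; };

    // Returns the indices of automata the checker should consider next.
    // Prefers automata of the currently running thread (run-to-completion);
    // if nothing is enabled, the lowest-priority "idle catcher" resets
    // Current to nil so that, after time progresses, the choice among newly
    // enabled automata is non-deterministic again.
    std::vector<int> choose_candidates (const std::vector<Automaton> &autos,
                                        std::optional<int> &current) {
      std::vector<int> preferred, enabled;
      for (int i = 0; i < static_cast<int> (autos.size ()); ++i) {
        if (!autos[i].enabled) continue;
        enabled.push_back (i);
        if (current && autos[i].thread_id == *current)
          preferred.push_back (i);
      }
      if (!preferred.empty ()) return preferred;  // same thread keeps running
      if (enabled.empty ()) {
        current.reset ();                         // idle catcher: Current=nil
        return {};                                // then time may progress
      }
      return enabled;  // non-deterministic choice among enabled automata
    }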

  15. Problem: Tractability
  • Model checking can suffer from state space explosion
    • State space reduction and live variable analysis can help
    • But even good model checkers don’t fully solve this
  • Need to think of modeling as a design issue, too
    • Does the model represent what it needs to represent?
    • Can the model be re-factored to help the checker?
    • Can domain-specific information help avoid unnecessary checking?

  16. Optimization 1: Leader Election
  • Leader/Followers concurrency
    • Threads in a reactor thread pool take turns waiting on the reactor
    • One thread gets the “token” to access the reactor: the leader
    • All other threads wait for the token: the followers
  • It does not matter which thread gets selected as leader in a thread pool
    • The model checker is not aware of this domain-specific semantics
  • For the BASIC-P protocol example, pruning the symmetric leader choices saved a factor of ~50 in state space and a factor of ~20 in time
  [Figure: when the token to access the reactor is available, the branches on which of T1, T2, or T3 becomes leader are symmetric and can be pruned]
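For reference, the Leader/Followers hand-off being modeled here looks roughly like the following simplified sketch with a mutex and condition variable (the real ACE thread-pool reactor differs in detail):

    #include <condition_variable>
    #include <mutex>

    // Simplified Leader/Followers "token": one pool thread (the leader)
    // holds the token and waits on the reactor; the others (followers)
    // block here until the leader hands the token off.
    class LeaderFollowersToken {
    public:
      // Called by each pool thread before it may wait on the reactor.
      void become_leader () {
        std::unique_lock<std::mutex> lock (mutex_);
        cv_.wait (lock, [this] { return !leader_active_; });
        leader_active_ = true;          // this thread is now the leader
      }

      // Called by the leader after it detects an event, before processing
      // it, so another follower can take over waiting on the reactor.
      void promote_new_leader () {
        {
          std::lock_guard<std::mutex> lock (mutex_);
          leader_active_ = false;
        }
        cv_.notify_one ();              // wake exactly one follower
      }

    private:
      std::mutex mutex_;
      std::condition_variable cv_;
      bool leader_active_ = false;
    };

    // Typical pool-thread loop (sketch):
    //   for (;;) {
    //     token.become_leader ();        // wait for the reactor "token"
    //     wait_for_event_on_reactor ();  // e.g., select()
    //     token.promote_new_leader ();   // hand off before the upcall
    //     dispatch_event_handler ();     // process the event concurrently
    //   }

Which follower is woken by notify_one() is arbitrary, which is exactly the symmetry the optimization exploits: any choice of leader leads to an equivalent state.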

  17. Optimization 2: System Initialization
  • Similar idea, but a different technique
    • Iff it is OK to establish initial object relations in any order, the orderings can be optimized away
    • E.g., 2 server automata, each of which creates a reactor automaton
  • Useful when modeling object systems in model checkers with dynamic automaton creation capability (e.g., IF)
  • State space reduction depends on the application
    • Factor of ~250 for a deadlock scenario with 2 reactors and 3 threads in each reactor
  [Figure: the interleavings of “S1 creates R” and “S2 creates R” lead to equivalent initial configurations, so all but one ordering can be pruned]

  18. Verification of a Real-Time Gateway
  [Figure: Supplier1 and Supplier2 send to the Gateway, which forwards to Consumer1 through Consumer4]
  • An exemplar of many realistic ACE-based applications
  • We modified the Gateway example to add new capabilities
    • E.g., real time, reliability, control-push-data-pull
  • Value-added service in the Gateway before forwarding to a consumer
    • E.g., using consumer-specific information to customize the data stream
  • Different design and configuration choices become important
    • E.g., number of threads, dispatch lanes, reply wait strategies

  19. Model Checking/Experiment Configuration
  [Figure: Supplier S1 (100ms period) feeds Consumers C1 and C2 (100ms relative deadlines, value-added execution cost 20ms each); Supplier S2 (50ms period) feeds Consumers C3 and C4 (50ms relative deadlines, cost 10ms each)]
  • Gateway is theoretically schedulable under RMA
    • Utilization = 80%
    • Schedulable utilization = 100% for harmonic periods
  • Assumption: messages from the 50ms supplier are given higher preference than those from the 100ms supplier
  • ACE models let us verify scheduling enforcement in the actual system implementation
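Taking the periods and costs in the figure above at face value (our reading of the slide's table; the 20ms and 10ms values are assumed to be the per-message value-added execution costs C_i), the stated utilization follows directly:

    U = \sum_i \frac{C_i}{T_i} = \frac{20}{100} + \frac{20}{100} + \frac{10}{50} + \frac{10}{50} = 0.2 + 0.2 + 0.2 + 0.2 = 0.8

Since the 50ms and 100ms periods are harmonic, the RMA schedulable utilization bound is 100%, so the configuration is theoretically schedulable at 80%.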

  20. Real-time Gateway – Single Thread
  [Figure: Gateway with a single Reactor; SupplierHandlers receive events from the Suppliers and ConsumerHandlers forward to the Consumers, all in one thread]
  • A single reactor thread dispatches incoming events
  • The I/O (reactor) thread is the same as the dispatch thread
  • The I/O thread is responsible for the value-added service

  21. Real-time Gateway – Dispatch Lanes
  [Figure: Gateway with a single Reactor; SupplierHandlers hand messages to dispatch lanes, whose threads run the ConsumerHandlers that forward to the Consumers]
  • A single reactor thread again dispatches events to gateway handlers
  • The I/O (reactor) thread puts each message into a dispatch lane
  • Lane threads perform the value-added service and dispatch to consumers
  • Do queues help or hurt timing predictability?
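A dispatch lane of this kind can be sketched with ACE's task and message queue abstractions (a hedged outline; the class name Dispatch_Lane and the commented-out helpers such as value_added_service() are illustrative, not the Gateway's actual code):

    #include "ace/Task.h"
    #include "ace/Message_Block.h"

    // Sketch of one dispatch lane: the reactor (I/O) thread enqueues messages
    // with putq(); the lane's own thread dequeues them in svc(), performs the
    // value-added service, and forwards to the consumer.
    class Dispatch_Lane : public ACE_Task<ACE_MT_SYNCH> {
    public:
      int open (void * = 0) override {
        // One lane thread; a real gateway could give each lane its own
        // priority (e.g., one lane per rate for rate-monotonic scheduling).
        return this->activate (THR_NEW_LWP | THR_JOINABLE, 1);
      }

      // Called from the reactor thread's handle_input() upcall.
      int enqueue (ACE_Message_Block *mb) { return this->putq (mb); }

      // Runs in the lane thread.
      int svc () override {
        for (ACE_Message_Block *mb = 0; this->getq (mb) != -1; ) {
          // value_added_service (mb);   // e.g., customize per consumer
          // forward_to_consumer (mb);
          mb->release ();
        }
        return 0;
      }
    };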

  22. Model/Actual Traces for Real-time Gateway
  [Figure: model and actual execution timelines (10-100ms) for the single-threaded Gateway (execution in the context of the reactor thread) and the Gateway with dispatch lanes (execution in the context of lane threads), compared against the expected execution timeline with RMS; in the single-threaded case there is a deadline miss for Consumer4 because of blocking delay at the reactor]

  23. Two Essential Technical Objectives
  • A principled basis for middleware verification
    • Model each mechanism’s inherent complexity accurately
    • Remove unnecessary complexity through abstraction
    • Compose models tractably and with high fidelity to the system itself
  • New protocols and mechanisms for property enforcement
    • Exploit call graph structure and other domain-specific information
    • Develop efficient local mechanisms for end-to-end enforcement
    • Design frameworks to support entire families of related protocols

  24. Properties, Protocols, and Call Graphs
  [Figure: a distributed call graph with functions f1 through f4 spanning Reactor 1 (tR1 = 2) and Reactor 2 (tR2 = 1), annotated with α(f1) = 1 and α(f2) = α(f3) = α(f4) = 0]
  • Many real-time systems have static call graphs
    • even distributed ones
    • helps feasibility analysis
    • intuitive to program
  • Exploit this to design efficient protocols
    • pre-parse the graph and assign static attributes to its nodes
      • resource dependence, prioritization
    • maintain local state about resource use
    • enforce properties according to the (static) attributes and local state
      • Guard: α(fi) < tRj
      • Decrement, increment tRj around the guarded upcall
  [Subramonian et al., HICSS04] [Sanchez et al., FORTE05, IPDPS06, EMSOFT06, OPODIS06]

  25. Property Enforcement Mechanisms
  • Protocol enforcement has a common structure
    • pre-invocation method
    • invocation up-call
    • post-invocation method
  • Specialized strategies implement each protocol
    • BASIC-P: annotation + variable
    • k-EFFICIENT-P: annotation + array
    • LIVE-P: annotation + balanced binary tree
  • All of these protocols work by delaying upcalls
    • This constitutes a side effect that the model checker should evaluate
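To illustrate the pre-invocation / up-call / post-invocation structure, here is our own sketch of a BASIC-P-style "annotation + variable" strategy built around the guard α(f) < tR from the previous slide (a simplification for illustration, not the actual framework code):

    #include <condition_variable>
    #include <mutex>

    // Illustrative BASIC-P-style enforcement around a reactor upcall:
    // t_R counts the reactor's currently available threads, and each handler
    // carries a static annotation alpha(f). The upcall is delayed until the
    // guard alpha(f) < t_R holds; t_R is decremented for the duration of the
    // upcall and incremented afterwards.
    class BasicPGuard {
    public:
      explicit BasicPGuard (int initial_threads) : t_R_ (initial_threads) {}

      // Pre-invocation: block (delay the upcall) until the guard holds.
      void pre_invoke (int alpha) {
        std::unique_lock<std::mutex> lock (mutex_);
        cv_.wait (lock, [&] { return alpha < t_R_; });
        --t_R_;                      // one reactor thread now committed
      }

      // Post-invocation: release the thread and re-check waiters.
      void post_invoke () {
        {
          std::lock_guard<std::mutex> lock (mutex_);
          ++t_R_;
        }
        cv_.notify_all ();
      }

    private:
      std::mutex mutex_;
      std::condition_variable cv_;
      int t_R_;
    };

    // Usage around an upcall (sketch):
    //   guard.pre_invoke (alpha_of_handler);   // may delay the upcall
    //   handler->handle_input (h);             // the invocation up-call
    //   guard.post_invoke ();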

  26. Timing Traces for BASIC-P Protocol
  [Figure: three flows (Flow1, Flow2, Flow3), each passing through event handlers EH1x, EH2x, EH3x across reactors R1 and R2]
  • Model checking and actual timing traces show the BASIC-P protocol’s regulation of threads’ use of resources (no deadlock)

  27. BASIC-P Blocking Delay Comparison
  [Figure: model execution vs. actual execution traces showing the blocking delays for Client2 and Client3]

  28. Overhead of ACE TP/DA Reactor with BASIC-P
  [Figure: measured overhead vs. number of event handlers]
  • Negligible overhead with no DA protocol
  • Overhead increases linearly with the number of event handlers, due to suspend/resume actions on handlers at BASIC-P entry/exit

  29. A Brief Survey of Closely Related Work
  • Vanderbilt University and UC Irvine: GME, CoSMIC, PICML, Semantic Mapping
  • UC Irvine: DREAM
  • UC Santa Cruz: Code Aware Resource Management
  • UC Berkeley: Ptolemy, E-machine, Giotto
  • Kansas State University and University of Nebraska: Bogor, Cadena, Kiasan

  30. Concluding Remarks
  • Timed automata models of middleware building blocks
    • are useful to verify middleware concurrency and timing semantics
  • Domain-specific model checking refinements
    • help improve the fidelity of the models (run to completion, priorities)
    • can achieve significant reductions in state space
  • Property protocols
    • reduce what must be checked, by provable enforcement
    • also benefit from model checking (due to side effects)
  • Current and future work
    • Complete the implementation of the reactor protocol framework in ACE
    • Integrate priority inheritance mechanisms with RTCORBA
    • Evaluate alternatives for mechanism-level thread synchronization
    • Dynamic call graph adaptation, priority protocol enforcement
    • Extend the modeling approach beyond real-time concerns
      • Model mechanism faults and failure modes
      • Hybrid automata + domain-aware techniques for constraining complexity
