
Dependability and real-time




  1. Dependability and real-time • If a system is to produce results within time constraints, it needs to produce results at all! • How can we justify and measure how well computer systems meet their targets in the presence of faults?

  2. Dependable systems • How do things go wrong and why? • What can we do about it? • This lecture: Basic notions of dependability and redundancy in fault-tolerant systems

  3. Early computer systems • 1944: Real-time computer developed in the Whirlwind project at MIT; used in a military air traffic control system from 1951 • The short life of vacuum tubes gave a mean time to failure of 20 minutes

  4. Engineers: Fool me once, shame on you – fool me twice, shame on me

  5. Software developers: Fool me N times, who cares, this is complex and anyway no one expects software to work...

  6. FT - June 16, 2004 • “If you have a problem with your Volkswagen, the likelihood that it was a software problem is very high. Software technology is not something that we as car manufacturers feel comfortable with.” Bernd Pischetsrieder, chief executive of Volkswagen

  7. October 2005 • “Automaker Toyota announced a recall of 160,000 of its Prius hybrid vehicles following reports of vehicle warning lights illuminating for no reason, and cars' gasoline engines stalling unexpectedly.” Wired 05-11-08 • The problem was found to be an embedded software bug

  8. Driver support: Volvo cars • [Image omitted] • Source: Volvo Cars

  9. Early space and avionics • During 1955 there were 18 air carrier accidents in the USA (at a time when only 20% of the public was willing to fly!) • Today’s complexity is many times higher

  10. Airbus A380 • Integrated modular avionics (IMA), with safety-critical digital components, e.g. • Power-by-wire: complementing the hydraulically powered flight control surfaces • Cabin pressure control (implemented with a TTP-operated bus)

  11. Dependability • How do things go wrong? • Why? • What can we do about it? • Basic notions in dependable systems and replication for fault tolerance (sv. Pålitlighet)

  12. What is dependability? Property of a computing system which allows reliance to be justifiably placed on the service it delivers. [Avizienis et al.] The ability to avoid service failures that are more frequent or more severe than is acceptable. (sv. Pålitliga datorsystem)

  13. Attributes of dependability IFIP WG 10.4 definitions: • Safety: absence of harm to people and environment • Availability: the readiness for correct service • Integrity: absence of improper system alterations • Reliability: continuity of correct service • Maintainability: ability to undergo modifications and repairs

  14. Reliability [Sv. Tillförlitlighet] • Means that the system (functionally) behaves as specified, and does so continually over measured intervals of time • Measured as a probability of failure, e.g. 10⁻⁹ per hour • Another way of putting it: MTTF (mean time to failure) • In commercial flight systems: one failure in 10⁹ flight hours
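As a back-of-the-envelope illustration of these numbers (my own sketch, assuming the standard constant-failure-rate exponential model, which the slide does not state):

```python
import math

# Exponential failure model: R(t) = exp(-lambda * t), MTTF = 1 / lambda.
failure_rate = 1e-9        # failures per flight hour (the 10^-9 target)
mttf = 1.0 / failure_rate  # = 10^9 hours: one failure in 10^9 flight hours

# Probability that a 10-hour flight completes without failure:
t = 10.0
reliability = math.exp(-failure_rate * t)
print(f"MTTF = {mttf:.0e} hours, R({t} h) = {reliability:.12f}")
```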

  15. Faults, Errors & Failures • Fault: a defect within the system or a situation that can lead to failure • Error: manifestation (symptom) of the fault - an unexpected behaviour • Failure: system not performing its intended function

  16. Examples • Year 2000 bug • Bit flips in hardware due to cosmic radiation in space • Loose wire • Aircraft retracting its landing gear while on the ground • Effects in time: permanent / transient / intermittent

  17. Fault ⇒ Error ⇒ Failure • Goal of system verification and validation is to eliminate faults • Goal of hazard/risk analysis is to focus on important faults • Goal of fault tolerance is to reduce effects of errors if they appear - eliminate or delay failures • Some will remain…

  18. Dependability techniques Four approaches [IFIP 10.4]: • Fault prevention • Fault removal • Fault tolerance • Fault forecasting • Reading: Sections 5.1 & 5.3 of the article • TDDD07: lecture 7 • Let’s look at an example!

  19. Google’s 100 min outage September 2, 2009: A small fraction of Gmail’s servers were taken offline to perform routine upgrades. “We had slightly underestimated the load which some recent changes (ironically, some designed to improve service availability) placed on the request routers.” Fault forecasting was applicable!

  20. Dependability techniques Four approaches [IFIP 10.4]: • Fault prevention • Fault removal • Fault tolerance • Fault forecasting

  21. Fault tolerance • Means that a system provides a degraded (but acceptable) function • Even in the presence of faults • During a period defined by certain model assumptions • Foreseen or unforeseen? • The fault model describes the foreseen faults

  22. External factors The film...

  23. Fault models • Node failures: • Crash • Omission • Timing • Byzantine • In distributed systems, also channel failures: • Crash (and potential partitions) • Message loss • Message delay • Erroneous/arbitrary messages

  24. On-line fault management • Fault/error detection • By the program or its environment • Fault containment by architectural choices • Fault tolerance using redundancy • Software • Hardware • Data

  25. Redundancy From D. Lardner, Edinburgh Review, 1824: “The most certain and effectual check upon errors which arise in the process of computation is to cause the same computations to be made by separate and independent computers*; and this check is rendered still more decisive if their computations are carried out by different methods.” * people who compute

  26. Static Redundancy Used all the time (whether an error has appeared or not), just in case… • SW: N-version programming • HW: Voting systems • Data: parity bits, checksums
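A minimal sketch of two of the flavours just listed, voting over N versions and a parity bit (the function names are mine, not from the lecture):

```python
from collections import Counter

def majority_vote(results):
    """Static redundancy: N independently computed results are voted on;
    a minority of faulty versions is masked."""
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: too many divergent versions")
    return value

def parity_bit(bits):
    """Even parity: one redundant bit that detects any single-bit error."""
    return sum(bits) % 2

print(majority_vote([42, 42, 41]))  # -> 42: the one faulty version is masked
print(parity_bit([1, 0, 1, 1]))     # -> 1, stored or sent alongside the data
```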

  27. Dynamic Redundancy Used when an error appears and specifically aids the treatment • SW: Rollback recovery, exceptions • HW: Switching to a back-up module • Data: Self-correcting codes • Time: Re-computing a result
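Time redundancy is the easiest of these to sketch; a hypothetical example (the acceptance test and names are mine), effective only against transient faults:

```python
import random

def recompute_on_error(compute, accept, attempts=3):
    """Dynamic redundancy in time: recompute when the acceptance test
    detects an error; a persistent fault exhausts all attempts."""
    for _ in range(attempts):
        result = compute()
        if accept(result):
            return result
    raise RuntimeError("persistent fault: all recomputations failed")

def sensor_average():
    value = 21.5
    return value if random.random() > 0.01 else 1e9  # rare transient corruption

# A cheap range check serves as the acceptance test:
print(recompute_on_error(sensor_average, accept=lambda v: 0.0 <= v <= 100.0))
```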

  28. Server replication models • Passive replication • Active replication • [Figure: both models illustrated; the legend’s symbol denotes a replica group]

  29. Underlying mechanisms • Active replication • Replicas must be deterministic in their computations (same output on same sequence of inputs) • Relies on group membership protocol: who is up, who is down? • Passive replication • Primary – backup: brings the secondary server up to date • After each “write”, or periodically?
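A minimal primary-backup sketch of the “after each write” variant (the class names are mine; a real system must also handle failover and lost updates):

```python
class Backup:
    def __init__(self):
        self.state = {}

    def apply(self, snapshot):
        self.state = snapshot            # overwrite with the primary's state

class Primary:
    """Passive replication: the primary alone executes writes, then brings
    the backups up to date after each write."""
    def __init__(self, backups):
        self.state = {}
        self.backups = backups

    def write(self, key, value):
        self.state[key] = value
        for b in self.backups:           # propagate the new state
            b.apply(dict(self.state))

backup = Backup()
primary = Primary([backup])
primary.write("x", 1)
assert backup.state == {"x": 1}          # backup is ready to take over
```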

  30. For high availability • In both cases we need to • Respect message ordering • Reach (implicit) agreement among replicas

  31. Relating states • Needs a notion of time (or event ordering) in distributed systems • TDDC47: Recall nodes in TTP share a global clock • TDDD07: Recall from lecture 4 • Synchronised model • Asynchronous systems: • Abstract time via order of events • Notion of snapshot
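In the asynchronous case, “abstract time via order of events” can be made concrete with Lamport’s logical clocks; a minimal sketch (the class is mine):

```python
class LamportClock:
    """Logical time: event counters that respect causal order - if event a
    can influence event b, then timestamp(a) < timestamp(b)."""
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send(self):
        self.time += 1
        return self.time                 # timestamp carried by the message

    def receive(self, msg_time):
        self.time = max(self.time, msg_time) + 1
        return self.time
```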

  32. The consensus problem Assume that we have a reliable broadcast • Processes p1, …, pn take part in a decision • Each pi proposes a value vi • e.g. application state info • All non-faulty processes decide on a common value v that is equal to one of the proposed values

  33. Desired properties • Every non-faulty process eventually decides (Termination) • No two non-faulty processes decide differently (Agreement) • If a process decides v, then the value v was proposed by some process (Validity) • Algorithms for consensus have to be proven to have these properties in the presence of a given failure model

  34. Basic impossibility result [Fischer, Lynch and Paterson 1985] There is no deterministic algorithm solving the consensus problem in an asynchronous distributed system with a single crash failure

  35. Synchrony helps... • Computations at each node proceed in rounds initiated by pulses • Pulses implemented using local physical clocks, synchronised assuming bounded message delays
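To make the round structure concrete, here is a simulation sketch of one classical synchronous algorithm for crash faults (not the Byzantine protocol of the following slides): every process floods the values it has seen for t+1 rounds, then decides the minimum. The simulation simplifies reality by delivering each broadcast to everyone; in the real fault model a process crashing mid-broadcast may reach only a subset, which is exactly what forces t+1 rounds.

```python
def flooding_consensus(proposals, crash_round, t):
    """Synchronous consensus tolerating up to t crash faults.
    proposals: {pid: value}; crash_round: {pid: round in which pid stops}.
    In each of t+1 rounds every live process broadcasts all values it has
    seen; afterwards the survivors decide min(seen values)."""
    seen = {p: {v} for p, v in proposals.items()}
    for rnd in range(1, t + 2):                        # t+1 synchronous rounds
        msgs = [set(vals) for p, vals in seen.items()
                if crash_round.get(p, t + 2) > rnd]    # only live processes send
        for p in seen:                                 # everyone receives all
            for m in msgs:
                seen[p] |= m
    return {p: min(vals) for p, vals in seen.items()
            if crash_round.get(p, t + 2) > t + 1}      # decisions of survivors

# p1 proposes 3 but crashes in round 1; p2 and p3 still agree (on 5).
print(flooding_consensus({1: 3, 2: 5, 3: 7}, {1: 1}, t=1))
```

The properties of the previous slides hold here: all survivors hold the same set after t+1 rounds (Agreement, Termination), and the minimum of proposed values is itself a proposed value (Validity).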

  36. Byzantine agreement Problem: • Nodes need to agree on a decision but some nodes are faulty • Faulty nodes can act in an arbitrary way (can be malicious)

  37. Byzantine agreement protocol • Solves consensus in the presence of synchrony • Proposed by Pease, Shostak and Lamport [1980] • How many faulty nodes can the algorithm tolerate?

  38. Result from 1980 • Theorem: There is an upper bound t on the number of Byzantine node failures relative to the size N of the network: N ≥ 3t + 1 • Gives a (t+1)-round algorithm for solving consensus in a synchronous network • Here: • We just demonstrate that N = 3t nodes would not be enough!

  39. Scenario 1 • G and L1 are correct, L2 is faulty • [Figure: G sends 1 to both lieutenants; L1 truthfully relays “G said 1” to L2, while the faulty L2 tells L1 “G said 0”]

  40. Scenario 2 • G and L2 are correct, L1 is faulty • [Figure: G sends 0 to both lieutenants; L2 truthfully relays “G said 0” to L1, while the faulty L1 tells L2 “G said 1”]

  41. Scenario 3 • The general is faulty! • [Figure: G sends 1 to L1 and 0 to L2; both lieutenants relay honestly, so L1 hears “G said 0” from L2 and L2 hears “G said 1” from L1]

  42. 2-round algorithm … does not work with t=1, N=3! • Seen from L1, scenario 1 and 3 are identical, so if L1 decides 1 in scenario 1 it will decide 1 in scenario 3 • Similarly for L2, if it decides 0 in scenario 2 it decides 0 in scenario 3 • L1 and L2 do not agree in scenario 3!
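The indistinguishability argument can be checked mechanically; a tiny sketch (the view encoding is mine) computing each lieutenant’s complete local information in the three scenarios:

```python
def view(direct_from_G, relayed_by_peer):
    """Everything a lieutenant learns in the 2 rounds: the value G sent it,
    and what the other lieutenant claims G said."""
    return (direct_from_G, relayed_by_peer)

l1_s1 = view(1, 0)  # Scenario 1: G sent 1; faulty L2 lies "G said 0"
l2_s2 = view(0, 1)  # Scenario 2: G sent 0; faulty L1 lies "G said 1"
l1_s3 = view(1, 0)  # Scenario 3: faulty G sent 1 to L1; L2 honestly relays 0
l2_s3 = view(0, 1)  # Scenario 3: faulty G sent 0 to L2; L1 honestly relays 1

assert l1_s1 == l1_s3   # L1 cannot distinguish scenario 1 from scenario 3
assert l2_s2 == l2_s3   # L2 cannot distinguish scenario 2 from scenario 3
# So L1 must decide 1 and L2 must decide 0 in scenario 3: no agreement.
```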

  43. Summary • Dependable systems need to justify why services can be relied upon • To tolerate faults we normally deploy replication • Replication needs (some kind of) agreement on state • Agreement problems are hard to solve & prove correct in distributed systems • Require assumptions on faults, and (some kind of) synchrony

  44. Real-time communication • Remember: in a TTP bus • Nodes have synchronised clocks • The communication infrastructure (the collection of CCs and the TTP bus communication) guarantees a bounded time for new data to appear on the communication interface of a node (its CNI) • Byzantine faults in nodes can be detected by other nodes! • Exercise: Which faults are detectable in a system connected with a CAN bus?

  45. Timing and fault tolerance • Implementing fault tolerance often relies on support for timing guarantees • How do fault tolerance mechanisms affect application timeliness?

  46. Timing overhead • Does fault tolerance cost time? • Two examples: • Fault-tolerant scheduling • Replicated server

  47. Fault-tolerant scheduling • Assume we are running RMS • How can we deal with transient faults that lead to a process crash? • We can reschedule the process before its deadline! • Redundancy in time...
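One simple design-time way to budget for such re-execution (my own sketch, not the lecture’s method) is to test schedulability with every task’s worst-case execution time doubled, leaving room for one re-run per job; here with the Liu & Layland utilization bound, which is sufficient but pessimistic:

```python
def tolerates_one_reexecution(tasks):
    """tasks: list of (C, T) pairs - WCET and period, deadline = period.
    Can RMS still meet all deadlines if every job may be re-executed once
    (transient fault -> crash -> re-run)?  Budget 2*C per job and apply
    the Liu & Layland bound U <= n * (2^(1/n) - 1) (sufficient only)."""
    n = len(tasks)
    utilization = sum(2 * c / t for c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

print(tolerates_one_reexecution([(1, 10), (2, 20)]))  # True: slack for re-runs
```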

  48. Highly available servers • Optimal checkpointing interval for achieving highest availability • [Figure: a logging server sits between the client and the application server, recording client requests, request replies and checkpointing requests] (RTSLAB PhD thesis, 2005)
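A classic first-order answer to the interval question is Young’s formula, interval ≈ sqrt(2 · C · MTBF) with C the cost of taking one checkpoint; this is a generic textbook result, not necessarily the model of the cited thesis:

```python
import math

def young_interval(checkpoint_cost, mtbf):
    """Young's (1974) approximation of the checkpointing interval that
    minimizes expected lost work: sqrt(2 * C * MTBF)."""
    return math.sqrt(2 * checkpoint_cost * mtbf)

# E.g. a checkpoint costs 5 s and failures arrive about once a day:
print(f"checkpoint every {young_interval(5.0, 24 * 3600):.0f} s")
```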
