
Presentation Transcript


  1. Ensieea Rizwani Disk Failures in the real world: What does an MTTF of 1,000,000 hours mean to you? Bianca Schroeder, Garth A. Gibson, Carnegie Mellon University

  2. Motivation • Storage failure can not only cause temporary data unavailability, but also permanent data loss. • Technology trends and market forces may make storage system failures occur more frequently in the future.

  3. What is Disk Failure? • Assumption: Disk failures follow a simple “fail-stop model”, where disks either work perfectly or fail absolutely and in an easily detectable manner. • In reality, disk failures are much more complex. • Why?

  4. Complexities of Disk Failure • Often it is hard to correctly attribute the root cause of a problem to a particular hardware component. • Example: • If a drive experiences a latent sector fault or a transient performance problem, it is often hard to pinpoint the root cause.

  5. Complexities of Disk Failure • There is no single definition of when a drive is faulty. • Example: • Customers and vendors might use different definitions. • A common way for a customer to test a drive is to read all of its sectors to see if any reads experience problems, and to decide that the drive is faulty if any one operation takes longer than a certain threshold. • Many sites follow a “better safe than sorry” mentality and use even more rigorous testing.

  6. Challenge! Unfortunately, many aspects of disk failures in real systems are not well understood, probably because the owners of such systems are reluctant to release failure data or do not gather such data.

  7. Research data for this paper • The authors requested data from a number of large production sites and were able to convince several of them to provide failure data from some of their systems. • The article provides an analysis of seven data sets collected from: • high-performance computing sites • large internet services sites

  8. Data Sets • Consist primarily of hardware replacement logs • Vary in duration from one month to five years • Cover a population of more than 100,000 drives from at least four different vendors • Include drives with SCSI, FC and SATA interfaces

  9. Research data sets • The research is based on hardware replacement records and logs. • The article analyzes records from a number of large production systems; these records contain a count and the conditions for every disk that was replaced in the system during the data collection period.

  10. Data set Analysis After a disk drive is identified as the likely culprit in a problem, the operations staff perform a series of tests on the drive to assess its behavior. If the behavior qualifies as faulty according to the customer’s definition, the disk is replaced and a corresponding entry is made in the hardware replacement log.
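
As a rough illustration of how such a replacement log translates into the replacement rates discussed later in the deck, the sketch below computes an annual replacement rate (ARR) from a toy log. The record layout, drive population and observation period are illustrative assumptions, not the actual format of the data sets studied in the paper.

```python
# Minimal sketch: deriving an annual replacement rate (ARR) from a
# hardware replacement log. The record layout, population size and
# observation period below are illustrative assumptions only.
from datetime import date

# Each entry: (date of replacement, identifier of the replaced drive).
replacement_log = [
    (date(2005, 3, 14), "drive-0042"),
    (date(2005, 7, 2),  "drive-1337"),
    (date(2006, 1, 23), "drive-0815"),
]

drives_in_system = 1000      # drive population covered by the log (assumed)
observation_years = 2.0      # length of the data collection period (assumed)

drive_years = drives_in_system * observation_years
arr = len(replacement_log) / drive_years     # replacements per drive-year
print(f"ARR = {arr:.2%}")                    # 0.15% for this toy log
```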

  11. Factors Affecting Failure • Operating conditions and environmental factors: temperature, humidity • Data handling procedures: workloads, duty cycles or powered-on-hours patterns

  12. Effect of a Bad Batch • The failure behavior of disk drives may differ even if they are of the same model. • Changes in the manufacturing process and parts could play a huge role: • the drive’s hardware or firmware components • the assembly line on which a drive was manufactured • A bad batch can lead to unusually high drive failure rates or high media error rates.

  13. Bad Batch Example • HPC3, one of the data set customers, had 11,000 SATA drives replaced in Oct. 2006 due to a high frequency of media errors during writes. • It took a year to resolve. • The customer and vendor agreed that these drives did not meet warranty conditions. • The cause was attributed to the breakdown of a lubricant during manufacturing, leading to unacceptably high head flying heights.

  14. Bad Batch • The effect of batches is not analyzed. • The research reports on the field experience, in terms of disk replacement rates, of a set of drive customers. • Customers usually do not have the information necessary to determine which of the drives they are using come from the same or different batches. • Since the data spans a large number of drives (more than 100,000) and comes from a diverse set of customers and systems, the effect of individual batches should average out.

  15. Research Failure Data Sets

  16. Reliability Metrics • Annualized Failure Rate (AFR): the percentage of drives expected to fail, scaled to a per-year estimate • Annual Replacement Rate (ARR): the replacement rate observed in the field • Mean Time To Failure (MTTF) = power-on hours per year / AFR • Fact: MTTFs specified for today’s highest-quality disks range from 1,000,000 hours to 1,500,000 hours
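
To make the relationship between these metrics concrete, the sketch below converts the datasheet MTTF values quoted on this slide into the annualized failure rates they imply, under the assumption that a drive is powered on around the clock (8,760 hours per year).

```python
# Sketch: converting between datasheet MTTF and AFR, assuming a drive
# that is powered on 24/7 (8,760 power-on hours per year).

HOURS_PER_YEAR = 24 * 365  # 8,760 power-on hours per year (assumption)

def afr_from_mttf(mttf_hours: float) -> float:
    """AFR (fraction of drives per year) implied by a datasheet MTTF."""
    return HOURS_PER_YEAR / mttf_hours

def mttf_from_afr(afr: float) -> float:
    """MTTF in hours implied by an annualized failure rate."""
    return HOURS_PER_YEAR / afr

for mttf in (1_000_000, 1_500_000):
    print(f"MTTF {mttf:>9,} h  ->  AFR {afr_from_mttf(mttf):.2%} per year")
# MTTF 1,000,000 h  ->  AFR 0.88% per year
# MTTF 1,500,000 h  ->  AFR 0.58% per year
```

In other words, a 1,000,000-hour MTTF corresponds to a datasheet failure rate of under 1% per year, which is the baseline the later slides compare observed field replacement rates against.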

  17. Hazard rate h(t) • Hazard rate of the time between disk replacements: h(t) = f(t) / (1 − F(t)) • t denotes the time between failures • f(t) is the probability density function and F(t) the cumulative distribution function of the time between failures • h(t) describes the instantaneous failure rate as a function of the time since the most recently observed failure

  18. Hazard rate Analysis • A constant hazard rate implies that the probability of failure at a given point in time does not depend on how long it has been since the most recent failure. • An increasing hazard rate means that the probability of a failure increases if the time since the last failure has been long. • A decreasing hazard rate means that the probability of a failure decreases if the time since the last failure has been long.
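
As a concrete illustration of these three regimes, the sketch below evaluates the hazard function of a Weibull distribution (the family the later slides report as the best fit) for a few shape parameters. The shape and scale values are illustrative choices, not values fitted in the paper.

```python
# Sketch: the three hazard-rate regimes via the Weibull hazard function
# h(t) = (k / lam) * (t / lam)**(k - 1).
# k < 1 -> decreasing hazard, k == 1 -> constant hazard (exponential),
# k > 1 -> increasing hazard. Parameter values are illustrative only.

def weibull_hazard(t: float, k: float, lam: float) -> float:
    """Instantaneous failure rate of a Weibull(shape=k, scale=lam) at time t."""
    return (k / lam) * (t / lam) ** (k - 1)

for k in (0.7, 1.0, 1.5):                      # shape values for illustration
    rates = [weibull_hazard(t, k, lam=10.0) for t in (1.0, 5.0, 20.0)]
    if rates[0] > rates[-1]:
        trend = "decreasing"
    elif rates[0] < rates[-1]:
        trend = "increasing"
    else:
        trend = "constant"
    formatted = ", ".join(f"{r:.3f}" for r in rates)
    print(f"shape k={k}: h(t) at t=1, 5, 20 -> {formatted}  ({trend})")
```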

  19. Comparing Disk Replacement Frequency with That of Other Hardware Components • The reliability of a system depends on all of its components, not just the hard drives. • How does the frequency of hard drive failures compare to that of other hardware failures?

  20. Hardware Failure Comparison • While the table on this slide suggests that disks are among the most commonly replaced hardware components, it does not necessarily imply that disks are less reliable or have a shorter lifespan than other hardware components.

  21. Responsible Hardware • Node outages that were attributed to hardware problems, broken down by the responsible hardware component. This includes all outages, not only those that required replacement of a hardware component.

  22. It is interesting to observe that for these data sets there is no significant discrepancy between the replacement rates for SCSI and FC drives, commonly represented as the most reliable types of disk drives, and SATA drives, frequently described as lower quality. Note that HPC4 uses exclusively SATA drives.

  23. Why are the observed field ARRs so much higher than what the datasheet MTTFs suggest? • Field ARRs are more than a factor of two higher than the datasheet AFRs. • Customer and vendor definitions of “faulty” vary. • MTTFs are determined based on accelerated stress tests, which make certain assumptions about the operating conditions.

  24. Age-dependent replacement rate Failure rates of hardware products typically follow a “bathtub curve”, with high failure rates at the beginning (infant mortality) and the end (wear-out) of the lifecycle.

  25. MTTF & AFR • ARR is larger than the rate suggested by the datasheet MTTF in all years except the first. • Replacement rates increase with drive age, suggesting early onset of wear-out. • This disagrees with the “bottom of the bathtub” analogy.

  26. Distribution of Time Between Replacements • Distribution of time between disk replacements across all nodes in HPC1.

  27. Many have pointed out the need for a better understanding of what disk failures look like in the field. Yet hardly any published work exists that provides a large-scale study of disk failures in production systems.

  28. Conclusion • Field usage appears to differ from datasheet MTTF conditions. • For drives less than five years old, the ARR was larger than what the datasheet MTTF suggested, by a factor of 2–10. This is the age range in which failure rates are often expected to be in steady state (the bottom of the “bathtub curve”). • In the data sets, the replacement rates of SATA disks are not worse than the replacement rates of SCSI or FC disks. This may indicate that disk-independent factors, such as operating conditions, usage and environmental factors, affect replacement rates more than component-specific factors.

  29. Conclusion • There is concern that datasheet MTTFs under-represent infant mortality, but the early onset of wear-out is more important than the under-representation of infant mortality. • The empirical distributions of time between disk replacements are best fit by Weibull and gamma distributions, not by exponential distributions.
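
A minimal sketch of how such a distributional comparison can be made with off-the-shelf tools is shown below. It uses synthetic Weibull-distributed inter-replacement times rather than the paper's (non-public) logs, and simply compares the log-likelihoods of a Weibull fit and an exponential fit.

```python
# Sketch: comparing Weibull vs. exponential fits to time-between-replacement
# data. The data here is synthetic (Weibull by construction); the paper's
# replacement logs themselves are not publicly available.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic inter-replacement times (days), decreasing-hazard regime (shape < 1).
times = stats.weibull_min.rvs(c=0.7, scale=30.0, size=5000, random_state=rng)

# Fit both candidate distributions, pinning the location parameter at zero.
shape, loc_w, scale_w = stats.weibull_min.fit(times, floc=0)
loc_e, scale_e = stats.expon.fit(times, floc=0)

ll_weibull = stats.weibull_min.logpdf(times, shape, loc_w, scale_w).sum()
ll_expon = stats.expon.logpdf(times, loc_e, scale_e).sum()

print(f"Weibull fit:     shape={shape:.2f}, scale={scale_w:.1f}, "
      f"log-likelihood={ll_weibull:.1f}")
print(f"Exponential fit: scale={scale_e:.1f}, log-likelihood={ll_expon:.1f}")
# A higher log-likelihood for the Weibull fit (with shape well below 1) reflects
# a decreasing hazard rate that the exponential's constant hazard cannot capture.
```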

  30. Thank You!

  31. Citation: Bianca Schroeder and Garth A. Gibson. Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you? FAST ’07. http://www.cs.cmu.edu/~bianca/fast07.pdf
