
Presentation Transcript


  1. CPSC 421 Week 3 Chapter 13 (Elmasri)

  2. Disk Storage • Basic File Structures • Hashing

  3. Terms • Disk Pack • Stack of individual disks • Track • Concentric circles on individual disks • Cylinder • Tracks with the same diameter on the disks in a disk pack

  4. Single-Sided Disk / Disk Pack (figure)

  5. Sectors • Hard-coded divisions of tracks • A sector is an arc of a track • Sectors either • Subtend a fixed angle, or • Subtend smaller angles as one moves away from the center, to maintain a uniform recording density

  6. More Terms • Block • An equal division of a track, set by the OS during formatting • Block sizes usually range from 512 to 4096 bytes • Transfer of data between main memory and disk takes place in units of blocks

  7. Disk Address of Block • Surface Number • Track Number • Block Number
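
A tiny sketch of how such an address might be computed from a linear block number; the geometry constants and the decomposition below are illustrative assumptions, not part of the slides:

```python
# Hypothetical disk-pack geometry (assumed values, for illustration only).
BLOCKS_PER_TRACK = 64
TRACKS_PER_SURFACE = 1000

def block_address(linear_block):
    # Decompose a linear block number into (surface, track, block).
    block = linear_block % BLOCKS_PER_TRACK
    track = (linear_block // BLOCKS_PER_TRACK) % TRACKS_PER_SURFACE
    surface = linear_block // (BLOCKS_PER_TRACK * TRACKS_PER_SURFACE)
    return surface, track, block

print(block_address(123_456))   # (1, 929, 0)
```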

  8. Read Command • Causes a block of data to be copied from disk into an I/O buffer, which is a contiguous area of main memory

  9. Write Command • Copies the contents of an I/O buffer to a disk block

  10. Block Access Time = • Seek Time + • Time required to position the read/write head over the correct track • Rotational Delay + • Time required to rotate the disk until the correct block is under the head • Block Transfer Time • Time required to transfer the data to/from the I/O buffer
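
A quick numeric sketch of this sum; the timing values below are illustrative assumptions, not figures from the slides:

```python
# Time to access one block = seek time + rotational delay + block transfer time.
# All parameter values here are illustrative assumptions.
def block_access_time_ms(seek_ms, rpm, block_bytes, transfer_rate_mb_s):
    rotational_delay_ms = 0.5 * (60_000 / rpm)              # average: half a revolution
    transfer_ms = block_bytes / (transfer_rate_mb_s * 1_000_000) * 1_000
    return seek_ms + rotational_delay_ms + transfer_ms

# Example: 8 ms average seek, 7200 RPM, 4096-byte block, 100 MB/s transfer rate.
print(block_access_time_ms(8, 7200, 4096, 100))   # ~12.2 ms, dominated by seek + rotation
```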

  11. Relative Speeds • Slow, because mechanical: • Seek Time & Rotational Delay • Relatively fast, because electronic: • Block Transfer Time

  12. Speed-Up Possibility • Since Seek Time and Rotational Delay are the bottlenecks, one technique is to transfer more than one contiguous block at a time. • But it’s expensive to maintain records on contiguous blocks

  13. Blocking Factor • Number of complete records that will fit in a block • (Figure: a block holding records r1-r4, with unused space at the end)
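
The blocking factor follows directly from the block size B and a fixed record size R; the byte sizes below are illustrative assumptions:

```python
# Blocking factor for fixed-length records: bfr = floor(B / R).
# B = block size in bytes, R = record size in bytes (illustrative values).
B, R = 4096, 300
bfr = B // R                  # 13 complete records fit in each block
unused = B - bfr * R          # 196 bytes left unused at the end of the block
print(bfr, unused)
```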

  14. Issues • Some operating systems allow the system administrator to specify the block size • Clearly, less space is wasted if the block size is a multiple of the record size

  15. Double Buffering Speed-Up • Without double buffering: • Read a block into the buffer • Process the block • Read the next block

  16. Double Buffering Speed-Up • With double buffering: • Disk controller reads a block into buffer A • Controller signals that the operation is complete • In parallel: • CPU processes buffer A • Controller reads the next block into buffer B

  17. Consequence • If the time required to fill a buffer exceeds the time required to process a buffer, the controller can fill buffers continuously • (Timeline: the controller runs Fill A, Fill B, Fill A, … back to back, while the CPU runs Process A, Process B, … in parallel; each buffer has already been emptied by the time the controller returns to it)
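
A back-of-the-envelope sketch of the speed-up; the per-block fill and process times are illustrative assumptions:

```python
# Elapsed time for n blocks, with and without double buffering.
# fill and process are per-block times in ms (illustrative assumptions).
def single_buffer_time(n, fill, process):
    return n * (fill + process)          # strictly sequential: fill, process, fill, ...

def double_buffer_time(n, fill, process):
    # Filling block i+1 overlaps processing block i, so after the first
    # fill each block effectively costs max(fill, process).
    return fill + (n - 1) * max(fill, process) + process

n, fill, process = 100, 10, 6
print(single_buffer_time(n, fill, process))   # 1600 ms
print(double_buffer_time(n, fill, process))   # 1006 ms
```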

  18. RAID • Processor speed has increased enormously (doubling every 18 months) • RAM capacity has quadrupled every two or three years • Disk capacities are improving at 50% per year • But access times are improving at only 10% per year • And transfer rates at only 20% per year

  19. Trends in Disk Technology (figure)

  20. Improving Reliability with RAID • The basic idea is to add redundancy • For an array of N disks, the likelihood of a failure is N times that of a single disk • Disk MTTFs range up to 1 million hours • Suppose our disks have an MTTF of 200,000 hours • With a pack of 100 disks, we'll have a failure every 2,000 hours, or about 83.3 days • So keeping a single copy of the data in such a setup is risky
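
The failure-rate arithmetic, written out (the 200,000-hour MTTF and the 100-disk pack are the slide's own numbers):

```python
# MTTF of an array of N independent disks is roughly MTTF_of_one_disk / N.
mttf_disk_hours = 200_000
n_disks = 100

mttf_array_hours = mttf_disk_hours / n_disks
print(mttf_array_hours)        # 2000.0 hours between failures
print(mttf_array_hours / 24)   # ~83.3 days
```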

  21. Introducing Redundancy: Mirroring • Write data to two physical disks, treated as one logical disk • When data is read, it can be read from the disk with the shorter queuing, seek, and rotational delays • If one disk fails, the other is used until the first is repaired • If the time to repair a disk is 24 hours and the MTTF is 200,000 hours, it can be shown that with a hundred-disk system the mean time to data loss is over 95,000 years
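
One standard way to arrive at a figure of that order is the mirrored-pair approximation, mean time to data loss ≈ MTTF² / (2 × MTTR); the sketch below applies that approximation to the slide's numbers, though it may not be the exact model behind the slide's figure:

```python
# Mirrored pair: data is lost only if the second disk fails while the
# first is still being repaired.  A common approximation is
#   MTTDL ≈ MTTF**2 / (2 * MTTR)
mttf_hours = 200_000
mttr_hours = 24

mttdl_hours = mttf_hours ** 2 / (2 * mttr_hours)
print(mttdl_hours)                 # ~8.3e8 hours
print(mttdl_hours / (24 * 365))    # ~95,000 years
```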

  22. More Benefits • Disk mirroring doubles the rate at which read requests can be handled, since a read can go to either disk • But (there's no free lunch): • Additional I/O operations for every write • Extra computation to maintain redundancy and to do error recovery • Additional disk capacity is required

  23. Another RAID Benefit • Data Striping • Break the block to be written into small units • Group N physical units into one logical unit • Write each piece to the same address on each of the N disks in parallel (e.g., bit-level striping spreads a byte across eight disks) • Striping can be done at the block level or even the bit level • The larger N is, the better the performance • But, assuming independent failures, the system is 1/Nth as reliable as a single disk
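
A minimal sketch of block-level striping; the round-robin placement below is one illustrative layout, not the only possible one:

```python
# Block-level striping across N disks: logical block i is placed on
# disk i % N, at position i // N on that disk (round-robin layout).
def striped_location(logical_block, n_disks):
    return logical_block % n_disks, logical_block // n_disks

n_disks = 4
for b in range(8):
    print(b, striped_location(b, n_disks))
# Blocks 0-3 land on disks 0-3, blocks 4-7 wrap around again, so a
# multi-block read can proceed on all four disks in parallel.
```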

  24. RAID • Developers have come up with a hierarchy of 7 RAID architectures • RAID 0: bit-level data striping and no redundant data; has the best write performance • RAID 1: data mirroring; read performance and reliability better than RAID 0 • RAID 2: includes error detection and correction • RAID 3: uses a single parity disk to detect failures • RAID 4: block-level data striping • RAID 5: block-level data striping with parity information distributed across all disks • RAID 6: protects against up to two disk failures
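
Parity in RAID 3-5 is ordinarily computed with XOR, so any single lost block can be rebuilt from the survivors; a minimal sketch with illustrative byte values:

```python
from functools import reduce

# XOR parity across data blocks: parity = d0 ^ d1 ^ ... ^ d(n-1).
# XOR-ing the parity with the surviving blocks reproduces a lost block.
def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"\x0f\x10", b"\xa5\x02", b"\x33\xff"]    # illustrative data blocks
parity = xor_blocks(data)

# Simulate losing data[1] and rebuilding it from the parity block plus the rest.
rebuilt = xor_blocks([parity, data[0], data[2]])
print(rebuilt == data[1])    # True
```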

  25. Most Widely Used • RAID 0 • RAID 1 (used for critical applications) • RAID 5
