
I/O Management and Disk Scheduling



  1. Operating Systems I/O Management and Disk Scheduling

  2. Categories of I/O Devices • Human readable • Used to communicate with the user • Printers and terminals

  3. Categories of I/O Devices • Machine readable • Used to communicate with electronic equipment • Disk drives • USB keys • Sensors • Controllers • Actuators

  4. Categories of I/O Devices • Communication • Used to communicate with remote devices • Digital line drivers • Modems

  5. Differences in I/O Devices • Data rate • May be differences of several orders of magnitude between the data transfer rates • Application • Disk used to store files requires file management software • Disk used to store virtual memory pages needs special hardware and software to support it • Terminal used by system administrator may have a higher priority

  6. Differences in I/O Devices • Complexity of control • Unit of transfer • Data may be transferred as • a stream of bytes for a terminal • in larger blocks for a disk • Data representation • Encoding schemes • Error conditions • Devices respond to errors differently

  7. I/O Device Data Rates

  8. Organization of the I/O Function • Programmed I/O • Process is busy-waiting for the operation to complete • Interrupt-driven I/O • I/O command is issued • Processor continues executing instructions • Direct Memory Access (DMA) • DMA module controls exchange of data between main memory and the I/O device • Processor interrupted only after entire block has been transferred

  9. Relationship Among Techniques

  10. Evolution of the I/O Function • Processor directly controls a peripheral device • Controller or I/O module is added • Processor uses programmed I/O without interrupts • Processor does not need to handle details of external devices • Controller or I/O module with interrupts • Processor does not spend time waiting for an I/O operation to be performed • Direct Memory Access • Blocks of data are moved into memory without involving the processor • Processor involved at beginning and end only

  11. Evolution of the I/O Function • I/O module is a separate processor (I/O channel) • I/O processor • I/O module has its own local memory • It's a computer in its own right

  12. Direct Memory Access • Processor delegates I/O operation to the DMA module • DMA module transfers data directly to, or from, memory • When complete, the DMA module sends an interrupt signal to the processor

  13. DMA • Number of words to be read or written, sent through the data lines to the data count register • I/O device address, sent through the data lines • Starting memory address for the transfer (read or write), sent through the data lines to the address register • Operation type (read or write), sent through the control lines
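The register handshake on this slide can be sketched as a small simulation. `DmaModule`, its register names, and the `on_interrupt` callback are illustrative inventions for the sketch, not a real driver API:

```python
# Sketch of the DMA register interface on slide 13, assuming the four values
# (operation, device address, memory address, word count) are written by the
# processor, after which the module moves the whole block and interrupts once.

class DmaModule:
    READ, WRITE = "read", "write"

    def __init__(self, memory, device):
        self.memory = memory          # main memory, modeled as a bytearray
        self.device = device          # device storage, also a bytearray

    def start(self, op, device_addr, mem_addr, count, on_interrupt):
        """Processor loads the registers, then goes on with other work."""
        if op == self.READ:           # device -> memory
            self.memory[mem_addr:mem_addr + count] = \
                self.device[device_addr:device_addr + count]
        else:                         # memory -> device
            self.device[device_addr:device_addr + count] = \
                self.memory[mem_addr:mem_addr + count]
        on_interrupt()                # single interrupt after the entire block

memory = bytearray(16)
device = bytearray(b"HELLO-DMA-WORLD!")
dma = DmaModule(memory, device)
dma.start(DmaModule.READ, device_addr=0, mem_addr=0, count=5,
          on_interrupt=lambda: print("DMA complete"))
print(bytes(memory[:5]))   # b'HELLO'
```

The key point the sketch preserves is that the processor is involved only at the start (loading the registers) and at the end (the interrupt), never per word.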

  14. DMA Configurations

  15. DMA Configurations

  16. DMA Configurations

  17. Operating System Design Issues • Efficiency • Most I/O devices extremely slow compared to main memory • Use of multiprogramming allows for some processes to be waiting on I/O while another process executes • I/O cannot keep up with processor speed • Swapping is used to bring in additional Ready processes, which is itself an I/O operation • Generality • Desirable to handle all I/O devices in a uniform manner • Hide most of the details of device I/O in lower-level routines

  18. Model of I/O Operation • Logical I/O: abstraction, not concerned with details of device control; management of general I/O functions on behalf of user processes • Communication architecture: multilayered (TCP/IP, etc.) • Device I/O: requested operations and data (buffered characters, records, …) converted into I/O instructions, channel commands, … • Scheduling and control: queueing and scheduling of operations, control of operations, interrupt handling, I/O status information; the layer that interacts directly with the hardware

  19. Model of I/O Operation • Directory management: symbolic file names converted to identifiers that reference the file directly or through a file descriptor or index table; user operations that affect the directory of files • File system: logical file structure and the operations users can specify (open, close, read, …); access rights management • Physical organization: logical references to files and records converted to physical secondary storage addresses; allocation of secondary storage space and main storage buffers

  20. I/O Buffering • Reasons for buffering • Processes must wait for I/O to complete before proceeding • Certain pages must remain in main memory during I/O, which interferes with swapping

  21. I/O Buffering • Block-oriented • Information is stored in fixed sized blocks • Transfers are made a block at a time • Used for disks and USB keys • Stream-oriented • Transfer information as a stream of bytes • Used for terminals, printers, communication ports, mouse and other pointing devices, and most other devices that are not secondary storage

  22. Single Buffer • Goal: to smooth out peaks in I/O demand • Operating system assigns a buffer in main memory for an I/O request

  23. Single Buffer • Block-oriented • Input transfers made to buffer • Block moved to user space when needed • Another block is moved into the buffer • Read ahead • User process can process one block of data while next block is read in • Swapping can occur since input is taking place in system memory, not user memory • Operating system keeps track of assignment of system buffers to user processes

  24. Single Buffer • Stream-oriented • Used a line at time • User input from a terminal is one line at a time with carriage return signaling the end of the line • Output to the terminal is one line at a time

  25. Single Buffer

  26. Double Buffer • Use two system buffers instead of one • A process can transfer data to or from one buffer while the operating system empties or fills the other buffer

  27. Circular Buffer • More than two buffers are used • Each individual buffer is one unit in a circular buffer • Used when I/O operation must keep up with process
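A minimal sketch of the circular-buffer idea above: N fixed-size slots, with the I/O side filling buffers ahead of the consuming process and both chasing each other around the ring. The class and method names are illustrative:

```python
# Circular buffer sketch: head is the next slot the producer (device I/O)
# fills, tail is the next slot the consumer (the process) drains.

class CircularBuffer:
    def __init__(self, slots):
        self.buf = [None] * slots
        self.head = 0      # next slot to fill
        self.tail = 0      # next slot to drain
        self.count = 0     # number of filled slots

    def put(self, block):
        if self.count == len(self.buf):
            raise BufferError("producer overran consumer")
        self.buf[self.head] = block
        self.head = (self.head + 1) % len(self.buf)
        self.count += 1

    def get(self):
        if self.count == 0:
            raise BufferError("consumer overran producer")
        block = self.buf[self.tail]
        self.tail = (self.tail + 1) % len(self.buf)
        self.count -= 1
        return block

ring = CircularBuffer(slots=3)
for blk in ("B0", "B1", "B2"):
    ring.put(blk)                  # I/O fills three buffers ahead
print(ring.get(), ring.get())      # process drains in FIFO order: B0 B1
```

Double buffering is the special case `slots=2`; more slots smooth out longer bursts when the process must keep up with the device.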

  28. Disk Performance Parameters • To read or write, the disk head must be positioned at the desired track and at the beginning of the desired sector • Seek time • Time it takes to position the head at the desired track • Rotational delay or rotational latency • Time it takes for the beginning of the sector to reach the head

  29. Timing of Disk I/O Transfer

  30. Disk Performance Parameters • Access time • Sum of seek time and rotational delay • The time it takes to get in position to read or write • Data transfer occurs as the sector moves under the head
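A quick back-of-the-envelope calculation shows how access time decomposes. All figures here are assumed, typical-magnitude values (4 ms average seek, 7200 rpm, 500 sectors per track), not numbers from the slides:

```python
# Access time = seek time + rotational delay; transfer then happens
# as the sector passes under the head.
seek_ms = 4.0                           # assumed average seek time
rpm = 7200
rotation_ms = 60_000 / rpm              # one full revolution: ~8.33 ms
rotational_delay_ms = rotation_ms / 2   # on average, half a revolution
sectors_per_track = 500
transfer_ms = rotation_ms / sectors_per_track  # one sector passing the head

access_ms = seek_ms + rotational_delay_ms
print(f"average access time = {access_ms:.2f} ms")        # 8.17 ms
print(f"one-sector transfer = {transfer_ms:.4f} ms")
```

Note that the per-sector transfer time is tiny compared to the positioning time, which is why scheduling the arm movement (next slides) matters so much.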

  31. Disk Scheduling Policies • Seek time is the reason for differences in performance • For a single disk there will be a number of I/O requests • If requests are selected randomly, performance is poor • In the example charts that follow, we assume • Disk head initially located at track 100 • Disk has 200 tracks • Queue has random requests in it • Specifically: 55, 58, 39, 18, 90, 160, 150, 38, 184

  32. Disk Scheduling Policies • First-in, first-out (FIFO) • Processes requests sequentially • Fair to all processes • Approaches random scheduling in performance if there are many processes

  33. Disk Scheduling Policies • Priority • Goal is not to optimize disk use but to meet other objectives • Short batch jobs may have higher priority • Provide good interactive response time • Last-in, first-out • Good for transaction processing systems • The device is given to the most recent user so there should be little arm movement • Possibility of starvation since a job may never regain the head of the line

  34. Disk Scheduling Policies • Shortest Service Time First • Select the disk I/O request that requires the least movement of the disk arm from its current position • Always choose the minimum seek time

  35. Disk Scheduling Policies • SCAN • Arm moves in one direction only, satisfying all outstanding requests until it reaches the last track in that direction • Direction is reversed

  36. Disk Scheduling Policies • C-SCAN • Restricts scanning to one direction only • When the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again

  37. Disk Scheduling Policies • N-step-SCAN • Segments the disk request queue into subqueues of length N • Subqueues are processed one at a time, using SCAN • While a subqueue is being processed, new requests are added to another subqueue • FSCAN • Uses two subqueues • While one queue is being serviced, all new requests go into the other
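Three of the policies above can be compared on the slide 31 example (head at track 100; queue 55, 58, 39, 18, 90, 160, 150, 38, 184) with a small simulation. One simplifying assumption: this SCAN reverses at the last pending request rather than traveling to the physical end of the disk:

```python
# Compare total head movement for FIFO, SSTF, and SCAN on the example queue.

def sstf(start, reqs):
    """Shortest Service Time First: always pick the nearest pending track."""
    pending, pos, order = list(reqs), start, []
    while pending:
        nxt = min(pending, key=lambda t: abs(t - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

def scan(start, reqs):
    """SCAN: sweep toward higher tracks first, then reverse direction."""
    up = sorted(t for t in reqs if t >= start)
    down = sorted((t for t in reqs if t < start), reverse=True)
    return up + down

def movement(start, order):
    """Total number of tracks the arm traverses for a service order."""
    pos, total = start, 0
    for t in order:
        total += abs(t - pos)
        pos = t
    return total

queue = [55, 58, 39, 18, 90, 160, 150, 38, 184]
for name, order in [("FIFO", list(queue)),
                    ("SSTF", sstf(100, queue)),
                    ("SCAN", scan(100, queue))]:
    print(name, movement(100, order))   # FIFO 498, SSTF 248, SCAN 250
```

FIFO pays for every zig-zag in arrival order; SSTF roughly halves the movement; SCAN is nearly as good as SSTF while avoiding SSTF's starvation of far-away requests.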

  38. Disk Scheduling Policies

  39. Disk Scheduling Algorithms

  40. RAID • Redundant Array of Independent Disks • RAID strategy • Employ multiple disk drives • Distribute data so that simultaneous access to data on different drives is possible • Goals: • Improve I/O performance • Allow easier incremental increases in capacity • Use of multiple devices increases the risk of failure • Redundancy is used to compensate for this problem

  41. RAID Levels • RAID scheme organized into 7 levels (0 through 6) • Not a hierarchical relationship • Different design architectures sharing three characteristics: • Set of physical disk drives viewed by the operating system as a single logical drive • Data are distributed across the physical drives of an array via striping • Redundant disk capacity is used to store parity information, guaranteeing data recoverability in case of a disk failure (for RAID 2 through 6)

  42. RAID Levels • Two important performance criteria • Data transfer capacity • ability to move data • I/O request rate • ability to satisfy I/O requests • Different levels perform differently relative to the above two metrics • Next table summarizes the characteristics of the seven levels

  43. RAID Levels

  44. RAID Layout • Data distributed across all disks • Advantage: good chance that two different I/O requests are for data blocks on different disks • The two requests could be issued in parallel, reducing I/O queueing time • Striping • all data viewed as being stored on one logical disk • logical disk divided into strips (blocks, sectors, …) • strips are mapped round robin to consecutive disks
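The round-robin strip mapping described above fits in a few lines: logical strip i lands on disk i mod n, at offset i div n within that disk. `strip_location` is a hypothetical helper for illustration, not part of any RAID implementation:

```python
# Round-robin striping: map a logical strip number to (disk, strip-on-disk).

def strip_location(logical_strip, n_disks):
    disk = logical_strip % n_disks          # disks are filled in rotation
    offset = logical_strip // n_disks       # row within each disk
    return disk, offset

# With 4 disks, the first 8 logical strips map as:
for i in range(8):
    print(i, strip_location(i, 4))
# strips 0-3 go to disks 0-3 at offset 0; strips 4-7 wrap around to offset 1
```

Because consecutive strips sit on different disks, a request spanning several strips can be serviced by several disks in parallel.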

  45. RAID 0 (nonredundant) • RAID 0 for high transfer rate • High transfer capacity must exist along the entire path between host memory and the individual disks, including all buses and adapters • Application I/O requests must drive the disks efficiently • Typical request is for a large amount of logically contiguous data, compared to the size of a strip

  46. RAID Level 0 • RAID 0 for high I/O request rate • Response time more important than data transfer rate • For individual I/O request for small amount of data, I/O time dominated by • movement of the disk heads (seek time) • movement of the disk (rotational latency) • High execution rates achieved by load balancing across disks of multiple, independent requests • Strip size also influences rate • Large strip leads to I/O request satisfied by single disk

  47. RAID 1 (mirrored) • Positive aspects: • request can be serviced by either disk (one with less access time) • write request requires updating both disks, but this can be done simultaneously • recovery from failure is simple. Use the other drive.

  48. RAID 2 (redundancy through Hamming code) Single-bit error correction, double-bit error detection Parallel access used: all member disks participate in the execution of every request Typically, spindles are synchronized so that each disk head is in the same relative position Strips are typically very small, often a single byte or word Only useful where many disk errors occur Given today's highly reliable disks and drives, RAID 2 is never used in practice

  49. RAID 3 (bit-interleaved parity) Last disk contains one parity bit for the set of individual bits in the same position on all the data disks Redundancy bit calculation: X4(i) = X3(i) ⊕ X2(i) ⊕ X1(i) ⊕ X0(i), where ⊕ denotes exclusive-or (addition mod 2) Calculation to restore bit i after a failure of drive X1: X1(i) = X4(i) ⊕ X3(i) ⊕ X2(i) ⊕ X0(i) Small strips → very high data transfer rates, especially for large transfers Only one request serviced at a time → performance suffers in a transaction environment
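The two XOR relations on this slide can be checked directly on byte-sized strips; the values chosen for X0 through X3 are arbitrary test data:

```python
# RAID 3/4/5-style parity: the parity strip X4 is the XOR of the data strips
# X0..X3, and a failed strip is rebuilt by XOR-ing parity with the survivors.

def xor_strips(s0, s1, s2, s3):
    """Byte-wise XOR of four equal-length strips."""
    return bytes(a ^ b ^ c ^ d for a, b, c, d in zip(s0, s1, s2, s3))

X0, X1, X2, X3 = b"\x0f", b"\x33", b"\x55", b"\xf0"
X4 = xor_strips(X3, X2, X1, X0)            # parity: X4 = X3 ^ X2 ^ X1 ^ X0
rebuilt_X1 = xor_strips(X4, X3, X2, X0)    # recovery: X1 = X4 ^ X3 ^ X2 ^ X0
print(rebuilt_X1 == X1)   # True
```

Recovery works because XOR is its own inverse: XOR-ing the parity equation with the surviving terms cancels everything except the lost strip.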

  50. RAID 4 (block-level parity) Uses an independent access technique: each disk operates independently Separate requests can be processed in parallel Better suited for a high I/O request rate than for high-transfer-rate applications Relatively large strips are used A bit-by-bit parity strip is calculated across corresponding strips on each data disk and stored in the corresponding strip on the parity disk Involves a write penalty for small I/O write requests Every write involves the parity disk, which can become a bottleneck
