I/O Management and Disk Scheduling Chapter 11
Categories of I/O Devices • Human readable • Machine readable • Communication
Categories of I/O Devices • Human readable • Used to communicate with the user • Printers • Video display terminals • Display • Keyboard • Mouse
Categories of I/O Devices • Machine readable • Used to communicate with electronic equipment • Disk and tape drives • Sensors • Controllers • Actuators
Categories of I/O Devices • Communication • Used to communicate with remote devices • Digital line drivers • Modems
Differences in I/O Devices • Data rate • Application • Complexity of control • Unit of transfer • Data representation • Error conditions
Techniques for Performing I/O • Programmed I/O • Process is busy-waiting for the operation to complete • Interrupt-driven I/O • I/O command is issued • Processor continues executing instructions • I/O module sends an interrupt when done • Direct Memory Access (DMA) • DMA module controls exchange of data between main memory and the I/O device • Processor interrupted only after entire block has been transferred
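The contrast between busy-waiting and interrupt-driven completion can be sketched in a few lines of Python. This is purely illustrative: the `Device` class and its delay are invented for the example, and a thread callback stands in for the hardware interrupt.

```python
import threading
import time

class Device:
    """Toy device: an I/O operation completes after a short delay."""
    def __init__(self):
        self.done = False
    def start(self, on_complete=None):
        def work():
            time.sleep(0.01)          # simulate the device working
            self.done = True
            if on_complete:
                on_complete()         # the "interrupt": device notifies the CPU
        threading.Thread(target=work).start()

# Programmed I/O: the processor busy-waits on the device status flag.
dev = Device()
dev.start()
while not dev.done:
    pass                              # CPU cycles wasted polling

# Interrupt-driven I/O: the processor is free to continue executing
# instructions; a completion signal arrives when the device is done.
finished = threading.Event()
dev2 = Device()
dev2.start(on_complete=finished.set)
# ... processor would execute other instructions here ...
finished.wait()                       # handle the completion signal
```

With DMA the same idea extends to whole blocks: the processor is notified only once, after the entire block has been transferred.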
Evolution of the I/O Function • Processor directly controls a peripheral device • Controller or I/O module is added • Processor uses programmed I/O without interrupts • Processor does not need to handle details of external devices
Evolution of the I/O Function • Controller or I/O module with interrupts • Processor does not spend time waiting for an I/O operation to be performed • Direct Memory Access • Blocks of data are moved into memory without involving the processor • Processor involved at beginning and end only
Evolution of the I/O Function • I/O module is a separate processor • I/O processor: a computer in its own right • I/O module has its own local memory
Direct Memory Access • Takes control of the system from the CPU to transfer data to and from memory over the system bus • Cycle stealing is used to transfer data on the system bus • The instruction cycle is suspended so data can be transferred • The CPU pauses for one bus cycle • No interrupts occur • No context needs to be saved
DMA • Problems and solutions • Cycle stealing causes the CPU to execute more slowly • The number of required bus cycles can be cut by integrating the DMA and I/O functions • Provide a path between the DMA module and the I/O module that does not include the system bus
Operating System Design Issues • Efficiency • Most I/O devices extremely slow • Use of multiprogramming allows for some processes to be waiting on I/O while another process executes • Generality • Desirable to handle all I/O devices in a uniform manner • Hide most of the details of device I/O in lower-level routines so that processes and upper levels see devices in general terms such as read, write, open, close, lock, unlock
Logical structure • Hierarchical philosophy • the functions of the OS should be organized in layers, according to their • complexity • characteristic time scale • level of abstraction
Local peripheral device layers • Logical I/O: deals with the device as a logical resource, not concerned with the details of actually controlling the device. • Device I/O: requested operations and data are converted into appropriate sequences of I/O instructions, channel commands, and controller orders. • Scheduling and control: the actual queuing and scheduling of I/O operations occurs at this layer, as well as the control of the operations.
I/O Buffering • Reasons for buffering • Processes must wait for I/O to complete before proceeding • Certain pages must remain in main memory during I/O
I/O Buffering • Block-oriented • Information is stored in fixed sized blocks • Transfers are made a block at a time • Used for disks and tapes • Stream-oriented • Transfer information as a stream of bytes • Used for terminals, printers, communication ports, mouse, and most other devices that are not secondary storage
Single Buffer • Operating system assigns a buffer in main memory for an I/O request • Block-oriented • Input transfers made to buffer • Block moved to user space when needed • Another block is moved into the buffer • Read ahead
Single Buffer • Block-oriented • User process can process one block of data while next block is read in • Swapping can occur since input is taking place in system memory, not user memory • Operating system keeps track of assignment of system buffers to user processes
Single Buffer • Stream-oriented • Used a line at a time • User input from a terminal is one line at a time, with a carriage return signaling the end of the line • Output to the terminal is one line at a time
Double Buffer • Use two system buffers instead of one • A process can transfer data to or from one buffer while the operating system empties or fills the other buffer
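A serialized toy model of that alternation is below. In a real system the fill of one buffer and the processing of the other overlap in time; here the steps run sequentially for clarity, and the block contents are invented.

```python
# Two system buffers: while the process consumes one,
# the operating system fills the other.
blocks = [b"block0", b"block1", b"block2", b"block3"]
buffers = [None, None]
consumed = []

buffers[0] = blocks[0]                  # OS pre-fills the first buffer
for i in range(len(blocks)):
    current = i % 2                     # buffer the process reads from
    other = (i + 1) % 2                 # buffer the OS fills meanwhile
    if i + 1 < len(blocks):
        buffers[other] = blocks[i + 1]  # "fill" overlaps with processing
    consumed.append(buffers[current])   # process consumes its buffer

assert consumed == blocks
```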
Circular Buffer • More than two buffers are used • Each individual buffer is one unit in a circular buffer • Used when I/O operation must keep up with process
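A minimal ring-buffer sketch of the idea; the class name, API, and error handling are illustrative, not from the slides.

```python
class CircularBuffer:
    """A fixed number of unit buffers reused in ring order."""
    def __init__(self, n):
        self.buf = [None] * n
        self.head = 0      # next slot to consume
        self.tail = 0      # next slot to fill
        self.count = 0
    def put(self, item):
        if self.count == len(self.buf):
            raise BufferError("producer outran consumer")
        self.buf[self.tail] = item
        self.tail = (self.tail + 1) % len(self.buf)
        self.count += 1
    def get(self):
        if self.count == 0:
            raise BufferError("consumer outran producer")
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        self.count -= 1
        return item

cb = CircularBuffer(3)
for x in ("a", "b", "c"):
    cb.put(x)
order = [cb.get() for _ in range(3)]   # items come out in FIFO order
```

Because the indices wrap modulo the buffer count, the same slots are reused indefinitely, which is what lets I/O keep pace with the process.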
Utility of Buffering • Buffering smooths out peaks in I/O demand • It can increase the efficiency of the operating system and the performance of individual processes.
Disk Scheduling • To read or write, the disk head must be positioned at the desired track and at the beginning of the desired sector • Aim of Scheduling: • minimize the access time: • seek time • rotational delay
Disk Performance Parameters • Seek time • time it takes to position the head at the desired track • Rotational delay or rotational latency • time it takes for the beginning of the sector to reach the head
Disk Performance Parameters • Access time • Sum of seek time and rotational delay • The time it takes to get in position to read or write • Data transfer occurs as the sector moves under the head
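As a worked example of that arithmetic: the average rotational delay is half a revolution, so for an assumed 7200 rpm drive with an assumed 4 ms average seek time:

```python
def avg_access_time_ms(avg_seek_ms, rpm):
    """Average access time = average seek + half a revolution."""
    ms_per_rev = 60_000 / rpm          # milliseconds per full revolution
    return avg_seek_ms + ms_per_rev / 2

# 7200 rpm -> 8.33 ms per revolution -> 4.17 ms average rotational delay
t = avg_access_time_ms(4.0, 7200)
print(round(t, 2))                      # → 8.17 (ms)
```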
Disk Scheduling Policies • Random scheduling • the worst possible performance • First-in, first-out (FIFO) • Priority • Last-in, first-out • Shortest Service Time First • SCAN • C-SCAN
Disk Scheduling Policies • First-in, first-out (FIFO) • Requests are processed sequentially, in arrival order • Fair to all processes • Approaches random scheduling in performance if there are many processes
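The cost of FIFO can be measured as total arm movement. The request stream and starting track below are illustrative values, not taken from the slides:

```python
def fifo_total_movement(start, requests):
    """Total track movement when requests are served in arrival order."""
    total, pos = 0, start
    for track in requests:
        total += abs(track - pos)   # arm travels to the next request
        pos = track
    return total

reqs = [55, 58, 39, 18, 90, 160, 150, 38, 184]
print(fifo_total_movement(100, reqs))   # → 498 tracks
```

With many independent processes interleaving requests, the stream looks random and this total grows accordingly.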
Disk Scheduling Policies • Priority • Goal is not to optimize disk use but to meet other objectives • Short batch jobs may have higher priority • Provide good interactive response time
Disk Scheduling Policies • Last-in, first-out • Good for transaction processing systems • The device is given to the most recent user so there should be little arm movement • Possibility of starvation since a job may never regain the head of the line
Disk Scheduling Policies • Shortest Service Time First • Select the disk I/O request that requires the least movement of the disk arm from its current position • Always chooses the minimum seek time
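A greedy sketch of SSTF, using the same illustrative request stream as above:

```python
def sstf_order(start, requests):
    """Repeatedly serve the pending request closest to the current head."""
    pending, pos, order = list(requests), start, []
    while pending:
        nxt = min(pending, key=lambda t: abs(t - pos))  # minimum seek
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

reqs = [55, 58, 39, 18, 90, 160, 150, 38, 184]
print(sstf_order(100, reqs))
# → [90, 58, 55, 39, 38, 18, 150, 160, 184]
```

Note how the greedy choice can leave far-away requests (150, 160, 184 here) waiting, which is how starvation arises.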
Disk Scheduling Policies • SCAN • Arm moves in one direction only, satisfying all outstanding requests until it reaches the last track in that direction • Direction is reversed
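A sketch of the SCAN ordering. One simplification to flag: this version reverses at the outermost pending request (the variant sometimes called LOOK) rather than physically reaching the last track, which is how textbook traces are usually computed.

```python
def scan_order(start, requests, direction="up"):
    """Sweep in one direction serving requests, then reverse."""
    up = sorted(t for t in requests if t >= start)
    down = sorted((t for t in requests if t < start), reverse=True)
    return up + down if direction == "up" else down + up

reqs = [55, 58, 39, 18, 90, 160, 150, 38, 184]
print(scan_order(100, reqs))
# → [150, 160, 184, 90, 58, 55, 39, 38, 18]
```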
Disk Scheduling Policies • C-SCAN • Restricts scanning to one direction only • When the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again
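C-SCAN differs only in what happens at the end of a sweep: instead of reversing, the arm jumps back and sweeps in the same direction again. A sketch, ignoring the cost of the return jump:

```python
def cscan_order(start, requests):
    """Serve requests moving up; wrap to the lowest track and continue up."""
    up = sorted(t for t in requests if t >= start)       # current sweep
    wrapped = sorted(t for t in requests if t < start)   # next sweep
    return up + wrapped

reqs = [55, 58, 39, 18, 90, 160, 150, 38, 184]
print(cscan_order(100, reqs))
# → [150, 160, 184, 18, 38, 39, 55, 58, 90]
```

Because every sweep moves in the same direction, the expected wait for a given track is more uniform than under SCAN.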
Disk Scheduling Policies • N-step-SCAN • Segments the disk request queue into subqueues of length N • Subqueues are processed one at a time, using SCAN • New requests are added to another queue while a subqueue is being processed • FSCAN • Uses two subqueues • While one queue is processed, the other, initially empty, queue collects new requests
RAID: Redundant Array of Independent Disks • RAID is a set of physical disk drives viewed by the operating system as a single logical drive. • Data are distributed across the physical drives of an array. • Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure.
RAID • Enables simultaneous access to data from multiple drives, thereby improving I/O performance and allowing easier incremental increases in capacity. • Using multiple drives increases the probability of a failure; to compensate, RAID makes use of stored parity information that enables the recovery of data lost due to a disk failure.
RAID • RAID 0: the user and system data are distributed across all of the disks in the array. Parallel processing, nonredundant. • RAID 1: redundancy is achieved by mirroring the disks • RAID 2 and 3: make use of a parallel access technique; all member disks participate in the execution of every I/O request.
RAID • RAID 2: redundancy through Hamming code; rarely implemented in practice • RAID 3: a single redundant disk, bit-interleaved parity • RAID 4: block-level parity • RAID 5: block-level distributed parity • RAID 6: dual redundancy
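The parity idea behind RAID 3/4/5 is plain XOR per stripe: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors. A small sketch with invented block contents:

```python
def xor_blocks(blocks):
    """XOR equal-length byte blocks together; the basis of RAID parity."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"disk0", b"disk1", b"disk2"]     # three data blocks in one stripe
parity = xor_blocks(data)                 # stored on the parity disk

# Disk 1 fails: rebuild its block from the surviving blocks plus parity.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

RAID 5 distributes these parity blocks across all member disks instead of dedicating one disk to them; RAID 6 adds a second, independent redundancy computation so two failures can be tolerated.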