
System Integration and Performance

This chapter discusses the implementation of the system bus and bus protocol, the interaction between the CPU and peripheral devices, the role of device controllers, interrupts coordination, and the use of buffers, caches, and data compression to improve system performance.

Presentation Transcript


  1. Chapter 6 System Integration and Performance

  2. Chapter goals • Describe the implementation of the system bus and bus protocol. • Describe how the CPU and bus interact with peripheral devices. • Describe the purpose and function of device controllers.

  3. Chapter goals cont. • Describe how interrupts coordinate actions of the CPU with secondary storage and I/O devices • Describe how buffers, caches, and data compression improve computer system performance

  4. Role of the system bus • The bus is the mechanism that allows computer components to work together • Is made up of parallel communication lines connecting computer components • CPU, hard drive, parallel port, modem, etc. • Can connect two or more devices • Information can travel in both directions

  5. System bus (cont.) • Can connect both internal (hard drive) and external (printer) devices • System bus has three parts • Data bus – carries data • Address bus – carries the memory address being read or written • Control bus – carries commands and status information • Each bus line carries 1 bit of information
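
To make the three parts concrete, here is a minimal Python sketch that models one bus transfer as a value on each of the three groups of lines; the field names and example values are illustrative assumptions, not part of the chapter.

    # Minimal sketch: one bus transfer modeled as values on the three parts of
    # the bus. Field names and example values are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class BusTransfer:
        address: int    # address bus: memory location being read or written
        data: int       # data bus: the value being carried
        control: str    # control bus: a command or status signal, e.g. "WRITE"

    transfer = BusTransfer(address=0x0040_1000, data=0xDEADBEEF, control="WRITE")
    print(transfer)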

  6. System bus

  7. Bus Clock • Like the CPU, the bus has a clock that acts as a timing device • For the CPU, each tick is a trigger to execute an instruction • For the system bus, each tick is an opportunity to transmit data or a control message

  8. Bus clock cont. • Bus clock is MUCH slower than CPU clock • Think of CPU as the highway and the system bus as the local streets

  9. Why slower? • Data on the bus must travel a longer physical distance than data inside the CPU • Even though signals travel at close to the speed of light, they still need more time to cover a greater distance • The slower clock allows time to filter out noise and interference • It also allows time to operate controller logic in peripheral devices

  10. Bus data transfer rate • Called the bus data capacity • Expresses how much data can travel across the bus over time • Is a combination of • Bus clock speed • Data transfer unit (usually a word) • Is used to calculate things like • Time required to load large files (e.g., video)
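
A rough calculation shows how the two factors combine; the clock speed, word size, and file size below are assumed example values, not figures from the chapter.

    # Rough bus-capacity calculation; all numbers are assumed example values.
    bus_clock_hz = 400_000_000          # assumed bus clock: 400 MHz
    word_size_bits = 64                 # assumed data transfer unit: one 64-bit word

    # data transfer rate = bus clock speed x data transfer unit
    capacity_bits_per_sec = bus_clock_hz * word_size_bits
    capacity_bytes_per_sec = capacity_bits_per_sec // 8

    # time required to load a large file, e.g. an assumed 4 GB video,
    # ignoring protocol overhead (see slide 12)
    file_size_bytes = 4 * 1024**3
    load_time_sec = file_size_bytes / capacity_bytes_per_sec

    print(f"Bus capacity: {capacity_bytes_per_sec / 1e9:.1f} GB/s")   # 3.2 GB/s
    print(f"Time to load the file: {load_time_sec:.2f} s")            # about 1.34 s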

  11. Bus protocol • Data transportation rules that ensure the smooth transfer of information without error • Dictates the format, content, and timing of data, memory addresses, and messages • Every peripheral device (no matter the manufacturer) must follow the bus protocol rules

  12. Bus protocol cont. • Protocol can impact (reduce) data transfer rates • Protocols often require exchanges of control signals • Control signals consume bus cycles that could otherwise send data

  13. Sample protocol • Example: if a disk drive transfers data to RAM as the result of an explicit CPU instruction, the following steps are followed: • CPU sends command to the drive • Drive sends acknowledgement to CPU • Drive carries out transfer • Drive sends confirmation to CPU that transfer is complete
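
The four steps can be sketched as a sequence of bus messages; the Device class and the message strings below are hypothetical, used only to make the handshake concrete.

    # Minimal sketch of the command / acknowledge / transfer / confirm exchange.
    # The Device class and message strings are illustrative, not a real bus API.

    class Device:
        def __init__(self, name):
            self.name = name

        def send(self, target, message):
            print(f"{self.name} -> {target.name}: {message}")

    cpu, drive, ram = Device("CPU"), Device("Disk drive"), Device("RAM")

    cpu.send(drive, "READ block 42 into RAM")   # 1. CPU sends command to the drive
    drive.send(cpu, "ACK")                      # 2. Drive acknowledges the command
    drive.send(ram, "<contents of block 42>")   # 3. Drive carries out the transfer
    drive.send(cpu, "TRANSFER COMPLETE")        # 4. Drive confirms completion

    # Only step 3 moves data; steps 1, 2, and 4 are control signals that
    # consume bus cycles, which is the overhead slide 12 describes.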

  14. Why use protocols? • Protocol regulates bus access • Stops devices from interfering with each other • I/O data transfer is the largest cause of errors in computers • I/O commands need to be acknowledged and confirmed

  15. What if two devices need the bus? • When two (or more) peripheral devices try to use the bus at the same time, the result is called a collision • Three approaches are used to deal with this • Master-slave • Multiple master • Peer to peer

  16. Master-slave • CPU is bus master • Traditional computer architecture • No device can access the bus unless in response to explicit command from CPU • Allows a very simple protocol • No collision is possible as long as CPU waits for response from device before proceeding to the next bus request

  17. Master-slave cont. • Overall system performance is severely degraded • If devices can only communicate through the CPU, then transfers between devices, e.g. memory to disk, must pass through the CPU • Every transfer takes at least 2 bus cycles • CPU cannot execute software while it is managing the bus

  18. A better solution • System performance is improved if storage and I/O devices can transmit data among themselves without explicit CPU involvement • Direct Memory Access (DMA) controller is attached to the bus and main memory • DMA assumes the role of bus master for all transfers between memory and other storage or I/O devices • CPU is free to do other work in the meantime

  19. Multiple master bus • Any device can assume control of the bus, or act as bus master for transfer to any other device (not just memory) • Still only a single device can be master at one time • Bus arbitration unit is a simple processor attached to a multiple master bus • It decides which devices must wait when multiple devices want to become a bus master
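
A bus arbitration unit can be pictured as a tiny priority scheduler; the fixed-priority policy and device names below are assumptions for illustration, not the scheme of any particular bus.

    # Sketch of a bus arbitration unit: when several devices request the bus in
    # the same cycle, one becomes bus master and the rest wait. The fixed
    # priorities and device names are illustrative assumptions.

    PRIORITY = {"DMA controller": 0, "disk controller": 1, "network card": 2, "CPU": 3}

    def arbitrate(requests):
        """Return the device granted the bus this cycle; all others must wait."""
        return min(requests, key=lambda device: PRIORITY[device])

    pending = ["network card", "disk controller", "CPU"]
    master = arbitrate(pending)
    print(f"Bus master this cycle: {master}")                      # disk controller
    print(f"Must wait: {[d for d in pending if d != master]}")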

  20. Logical vs. Physical Access • I/O port is a communication pathway from the CPU to a peripheral device • I/O port is often implemented as a memory address that can be accessed (read or written to) by • The CPU • Or a single peripheral device

  21. Logical and Physical Access

  22. I/O Ports • Each peripheral device may have several I/O ports and use them for different purposes • Dedicated bus hardware controls data movement between I/O ports and peripheral devices • CPU reads and writes to I/O ports using ordinary data movement instructions or dedicated I/O instructions

  23. The CPU and I/O Ports • I/O port is more than a memory address; it is a data conduit • It is a logical abstraction used by the CPU and the bus to interact with each peripheral device in a similar way

  24. Logical access • CPU and the bus both interact with each peripheral device as if it were a storage device containing one or more bytes of contiguous memory • CPU and the bus deal with each device the same way, but devices differ in • Storage capacity • Internal data coding methods • Whether they are storage or I/O devices

  25. Linear address space • A read/write operation to/from this hypothetical device is called a logical access • The set of sequentially numbered storage locations is called a linear address space

  26. How logical becomes physical • Logical access assumes device is similar to memory (RAM) • Bus address lines carry the position within the linear address space being read or written • Device controller makes the conversion via a conversion table or a simple algorithm
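
For a disk, the "simple algorithm" case amounts to integer arithmetic that maps a position in the linear address space to a physical cylinder, head, and sector; the disk geometry below is an assumed example.

    # Sketch of a controller's logical-to-physical conversion for a disk,
    # using a simple algorithm rather than a table. The geometry (sectors per
    # track, heads per cylinder) is an assumed example.

    SECTORS_PER_TRACK = 63
    HEADS_PER_CYLINDER = 16

    def logical_to_physical(logical_sector):
        """Map a position in the linear address space to (cylinder, head, sector)."""
        sectors_per_cylinder = HEADS_PER_CYLINDER * SECTORS_PER_TRACK
        cylinder = logical_sector // sectors_per_cylinder
        remainder = logical_sector % sectors_per_cylinder
        head = remainder // SECTORS_PER_TRACK
        sector = remainder % SECTORS_PER_TRACK
        return cylinder, head, sector

    print(logical_to_physical(100_000))   # (99, 3, 19)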

  27. Conversion table for disk

  28. Device controllers • Storage devices have intermediaries that connect them to the system bus • Translate logical accesses to physical accesses • Handle the bus protocol (receiving and acknowledging commands) • Permit several devices to share a bus connection

  29. Device controllers

  30. Device controllers cont. • Device controllers monitor the bus control lines for signals to their peripheral devices • Translate those signals into appropriate commands for their devices

  31. Interrupts • Secondary storage and I/O device transfer rates are much slower than the CPU • Why? • Slower bus clock • Peripheral devices have mechanical elements (access arm, spin mechanism) that move far more slowly than electrical signals

  32. Interrupts cont. • When the CPU issues a read/write instruction it ALWAYS has to wait • This waiting time can translate into thousands, millions, or even billions of CPU cycles • To allow CPU to be used more efficiently, interrupts are used

  33. How interrupts work • When a program (task, process, thread) needs I/O, the CPU makes the I/O request over the system bus • Then puts the task aside (asleep) • Does something else for the time being

  34. Interrupts cont. • When the I/O is complete, an interrupt signal is sent to the CPU • CPU can now restart the task, since the I/O it was waiting for is complete

  35. CPU and Interrupts • Portion of the CPU (separate from the fetch execute cycle) continuously monitors the bus for interrupt signals • The signal is an interrupt code that indicates the bus port number of the device sending the interrupt • CPU copies any interrupt signals it encounters into an interrupt register

  36. The CPU and Interrupts cont. • As an extra step in the fetch execute cycle, the CPU checks the interrupt register after completing an instruction but before fetching another one • If interrupt register has a non-zero value CPU must respond to the interrupt

  37. CPU and Interrupts • If CPU is to process an interrupt it does the following: • Puts aside (suspends) current task • Resets interrupt register to 0 (zero) • Processes interrupt by calling interrupt handler • After interrupt processing is complete, resumes suspended program
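
A minimal sketch of this extra step, assuming a toy instruction list and two made-up handlers; real CPUs do this in hardware, not software, so this only mirrors the sequence described in slides 36 and 37.

    # Toy simulation of the fetch-execute cycle with the extra interrupt check.
    # The instruction list, interrupt code, and handlers are all made up.

    program = ["LOAD", "ADD", "STORE", "HALT"]
    interrupt_register = 0                 # non-zero means an interrupt is pending
    HANDLERS = {1: lambda: print("  handler 1: keyboard input"),
                2: lambda: print("  handler 2: disk transfer complete")}

    def check_interrupt():
        global interrupt_register
        if interrupt_register != 0:        # non-zero value: CPU must respond
            code = interrupt_register
            print("  suspending current task, saving machine state")
            interrupt_register = 0         # reset the interrupt register to zero
            HANDLERS[code]()               # process the interrupt via its handler
            print("  resuming suspended task")

    pc = 0
    while program[pc] != "HALT":
        print(f"execute {program[pc]}")    # complete one instruction ...
        if pc == 1:
            interrupt_register = 2         # pretend a disk interrupt arrives here
        check_interrupt()                  # ... check interrupts before fetching
        pc += 1                            # the next instruction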

  38. Interrupt handlers • Interrupts are a mechanism for calling (invoking) system software processes and programs • Operating system (OS) provides low-level processing routines (service calls) • Examples: reading data in from the keyboard • Writing to a file

  39. Interrupt handlers cont. • There is a unique individual interrupt handler (i.e. program) to process each possible interrupt • Each handler is a separate program stored in a separate part of main memory

  40. Interrupt table • A conversion table in main memory that has a list of all interrupt codes • Interrupt code is used as an index into the interrupt table • For each interrupt code, the interrupt table holds the memory address of the corresponding interrupt handler

  41. Interrupt handlers • Supervisor (OS) examines the interrupt code, uses it as an index into the interrupt table • Looks up memory location of needed interrupt handler • Loads that memory location into the PC (program counter) • Interrupt handler begins executing
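
The lookup itself is just an indexed read followed by loading the PC; the interrupt codes and handler addresses below are made-up example values.

    # Sketch of the interrupt-table lookup: the interrupt code indexes the
    # table, and the handler's memory address is loaded into the PC.
    # Codes and addresses are made-up example values.

    INTERRUPT_TABLE = {      # interrupt code -> memory address of its handler
        1: 0x8000,           # e.g. keyboard input handler
        2: 0x8400,           # e.g. disk transfer complete handler
        3: 0x8800,           # e.g. critical hardware failure handler
    }

    def dispatch(interrupt_code, registers):
        handler_address = INTERRUPT_TABLE[interrupt_code]   # look up the handler
        registers["PC"] = handler_address                   # load it into the PC
        return registers                                    # the handler runs next

    state = dispatch(2, {"PC": 0x1234})
    print(hex(state["PC"]))               # 0x8400: the disk handler executes next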

  42. Multiple interrupts • It is possible (even likely) that interrupts will interrupt each other • OS has an algorithm to determine what goes first • Assigns priorities to different interrupts based on • Error conditions • Critical hardware failures

  43. Suspending a process • Whenever a process is suspended or interrupted, the system must save whatever information is necessary to allow the process to restart again • Typically that involves saving • PC and IR • Any other specialized or general-purpose registers that were in use

  44. Saving a process • The collection of information needed to restart a process is called the “machine state” • It is saved in a special storage location called the stack

  45. The Stack • The stack is a specialized storage location in RAM • It is a data structure where you add and remove information from the same end • Therefore the last process saved by the CPU is the first one it will pick up
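
A minimal sketch of saving and restoring machine state on a last-in, first-out stack; the register set saved here (PC and IR only) is simplified for illustration.

    # Sketch of saving and restoring machine state on a stack.
    # The saved register set (PC and IR only) is simplified.

    stack = []                             # add and remove at the same end (LIFO)

    def suspend(process_name, pc, ir):
        stack.append({"process": process_name, "PC": pc, "IR": ir})   # push

    def resume():
        return stack.pop()                 # pop: last saved is first restored

    suspend("user program", pc=0x0100, ir="ADD R1,R2")
    suspend("interrupt handler A", pc=0x8000, ir="IN R3")   # handler interrupted too
    print(resume())    # handler A comes back first (last in, first out)
    print(resume())    # then the original user program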

  46. Interrupt process

  47. Buffers • Buffers are a mechanism that uses RAM to overcome the slow data transfer rates of peripheral devices • Small storage area (in RAM) used to hold data in transit from one device to another
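
A minimal sketch of an output buffer, assuming a made-up capacity and a print statement standing in for the slow device.

    # Minimal sketch of an output buffer: data accumulates in RAM and is
    # released to the slow device in larger chunks. The capacity and the
    # print() standing in for the printer are illustrative assumptions.

    BUFFER_CAPACITY = 2048                 # assumed buffer size in bytes

    class OutputBuffer:
        def __init__(self):
            self.data = bytearray()

        def write(self, chunk):
            self.data.extend(chunk)        # fast write into RAM
            if len(self.data) >= BUFFER_CAPACITY:
                self.flush()               # full: release to the slow device

        def flush(self):
            if self.data:
                print(f"sending {len(self.data)} bytes to the printer")
                self.data.clear()

    buf = OutputBuffer()
    for _ in range(5):
        buf.write(b"x" * 1000)             # CPU hands off small pieces quickly
    buf.flush()                            # release whatever is left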

  48. Buffers and printing • A printable version of the document, with formatting information, is copied to RAM • When a full page is ready, it is released from the buffer • The document is written from RAM to the printer • Also have input buffers – keyboard, modem, etc.

  49. Buffers

  50. Cache • Pronounced "cash" • Separate high-speed storage area specifically managed to improve overall system performance • The idea is that the most often needed data is kept in the cache • Must be managed intelligently
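
A minimal sketch of the idea, assuming a tiny three-entry cache and a least-recently-used eviction policy; the policy is an assumed choice, not one named in the chapter.

    # Sketch of a small cache: hits are served from fast storage, misses go to
    # the slow device and the result is kept for next time. The three-entry
    # size and least-recently-used eviction are assumed choices.

    from collections import OrderedDict

    CACHE_SIZE = 3
    cache = OrderedDict()

    def slow_read(block):                  # stand-in for a slow disk access
        print(f"  disk read of block {block}")
        return f"data-{block}"

    def cached_read(block):
        if block in cache:                 # cache hit: no slow access needed
            cache.move_to_end(block)       # mark as most recently used
            return cache[block]
        data = slow_read(block)            # cache miss: go to the slow device
        cache[block] = data
        if len(cache) > CACHE_SIZE:
            cache.popitem(last=False)      # evict the least recently used entry
        return data

    for block in [1, 2, 1, 3, 1, 4, 2]:
        cached_read(block)                 # only misses print a "disk read" line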
