
Lecture 19: Input/Output (I/O): Buses and Peripherals




Presentation Transcript


  1. Lecture 19: Input/Output (I/O): Buses and Peripherals. Michael B. Greenwald, Computer Architecture, CIS 501, Fall 1999

  2. Bus Options (see Figure 6.9, page 497)

  3. Administration
  • Sotiris will lecture on Thursday (Chapter 4).
  • Thoughts on exercise 6.10 (should not involve any changes to your homework!)

  4. Processor Interface Issues: how does the bus interface with the processor?
  • Interconnections/buses
    • Shared vs. separate memory/I/O buses
    • Attach to memory, cache, or processor (separate buses only)
  • Processor communication interface
    • I/O interface vs. memory-mapped I/O
  • I/O control structure
    • Polling
    • Interrupts
    • DMA
    • I/O controllers
    • I/O processors

  5. How does the processor access I/O devices?
  • Need to read and write control and status registers (see the register sketch below).
  • Need to transfer data to/from the I/O device.
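A minimal C sketch of what "read and write control and status registers" looks like from the driver's side. The device, its register layout, the base address 0x80001000, and the bit values are all hypothetical, purely for illustration:

    /* Hypothetical device register block; layout and base address are
     * illustrative, not any real device's. */
    #include <stdint.h>

    typedef struct {
        volatile uint32_t control;   /* commands written by the CPU        */
        volatile uint32_t status;    /* ready/busy/error bits read by CPU  */
        volatile uint32_t data;      /* one word of data in or out         */
    } dev_regs_t;

    #define DEV           ((dev_regs_t *)0x80001000u)   /* assumed mapping */
    #define CTRL_START    0x1u
    #define STATUS_READY  0x1u

    /* Kick off an operation and report whether the device has data ready. */
    static int dev_start_and_check(void)
    {
        DEV->control = CTRL_START;                  /* write a control register */
        return (DEV->status & STATUS_READY) != 0;   /* read a status register   */
    }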

  6. I/O Interface
  [Figure: two bus organizations, each with CPU and Memory plus peripherals attached through interface controllers.]
  • Independent I/O bus alongside the memory bus: separate I/O instructions (in, out) address the devices (a port-I/O sketch follows this slide).
  • Common memory & I/O bus: control lines distinguish I/O transfers from memory transfers. At 40 MB/s, optimistically, a 10 MIPS processor completely saturates the bus! Examples: VME bus, Multibus-II, NuBus.
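A small sketch of the "separate I/O instructions" style, using x86 port I/O through Linux's <sys/io.h> wrappers. The port number 0x3F8 (conventionally the first serial UART) and the offsets are only examples; ioperm requires root privileges and an x86 machine:

    /* Sketch: the x86 "in"/"out" I/O-space instructions via <sys/io.h>.
     * Build on x86 Linux with: gcc -O2 port_io.c   (run as root for ioperm) */
    #include <stdio.h>
    #include <sys/io.h>

    #define COM1_BASE 0x3F8                 /* example port: first serial UART */

    int main(void)
    {
        if (ioperm(COM1_BASE, 8, 1) != 0) { /* ask for access to 8 ports */
            perror("ioperm");
            return 1;
        }
        outb(0x41, COM1_BASE);                      /* out: write 'A' to the data port */
        unsigned char status = inb(COM1_BASE + 5);  /* in: read the line status reg    */
        printf("line status = 0x%02x\n", status);
        return 0;
    }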

  7. Memory Mapped I/O
  [Figure 1: a single memory & I/O bus connects the CPU, ROM, RAM, and the peripherals through interface controllers; no separate I/O instructions.]
  [Figure 2: CPU with L1 ($) and L2 ($) caches on a memory bus with Memory; a bus adaptor bridges the memory bus to a separate I/O bus.]
  • The bus adaptor snoops memory-bus transactions and converts I/O-space addresses into I/O operations on the I/O bus (and converts I/O operations into memory reads and writes, too).

  8. I/O Architecture
  • Hardware architecture covers the interconnection points and the number of buses.
  • Software architecture: how I/O is managed by the processor(s).

  9. Example: Communications Networks
  The performance limiter is the memory system and OS overhead, not the protocols.
  • Send/receive queues in processor memories
  • Network controller copies back and forth via DMA
  • No host intervention needed
  • Interrupt host when message sent or received

  10. I/O Data Flow
  • Impediment to high performance: multiple copies, complex hierarchy.
  • Pipeline the stages? Not if they share a resource!

  11. Processor Interface Issues: how does the bus interface with the processor?
  • Interconnections/buses
    • Shared vs. separate memory/I/O buses
    • Attach to memory, cache, or processor (separate buses only)
  • Processor communication interface
    • I/O interface vs. memory-mapped I/O
  • I/O control structure
    • Polling
    • Interrupts
    • DMA
    • I/O controllers
    • I/O processors

  12. Programmed I/O (Polling)
  [Flowchart: the CPU asks the device "Is the data ready?"; if no, it loops (busy wait); if yes, it reads the data from the device, stores it to memory, and checks "done?"; if not done, it goes back to polling.]
  • A busy-wait loop is not an efficient way to use the CPU unless the device is very fast and very busy!
  • The checks for I/O completion can instead be dispersed among computationally intensive code, at the cost of increased latency before a ready device is noticed.
  A C sketch of the busy-wait version follows.
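The flowchart maps onto a short busy-wait loop. This sketch reuses the hypothetical register layout from the slide 5 sketch (status bit, addresses, and names are illustrative):

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical memory-mapped registers (illustrative addresses). */
    #define DEV_STATUS   (*(volatile uint32_t *)0x80001004u)
    #define DEV_DATA     (*(volatile uint32_t *)0x80001008u)
    #define STATUS_READY 0x1u

    /* Programmed I/O: poll until each word is ready, then copy it. */
    static void pio_read(uint32_t *buf, size_t nwords)
    {
        for (size_t i = 0; i < nwords; i++) {
            while ((DEV_STATUS & STATUS_READY) == 0)
                ;                      /* "Is the data ready?" -> busy wait  */
            buf[i] = DEV_DATA;         /* read data, store data              */
        }                              /* "done?" -> loop until nwords moved */
    }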

  13. Performance of Polling
  • Consider a 1 Mbps serial line, transferred a 32-bit word at a time: the external device delivers a word every 32 µsec.
  • 16 µsec to access (poll) the device, 2 µsec to transfer one word.
  • When the machine is otherwise idle, polling costs 16 µsec out of every 32 µsec, so 50% of the machine! When actually transferring data, 56% of the machine.

  Buffer (bytes)   Polling interval (µsec)   Idle (%)   Busy (%)
        0                    32                50.0        56
        8                    64                25.0        32
       16                   128                12.5        19
      512                  4000                 0.4         6
     2048                 16000                 0.1         6

  Here, latency = 1/2 the polling interval. The arithmetic behind the table is sketched below.
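A small sketch of the arithmetic behind the table, assuming the slide's parameters (a 4-byte word every 32 µsec, 16 µsec per poll, 2 µsec per word transferred). The slide rounds the two long intervals (4096 and 16384 µsec) to 4000 and 16000:

    #include <stdio.h>

    int main(void)
    {
        const double word_period = 32.0;   /* µsec between words (1 Mbps, 4 B) */
        const double poll_cost   = 16.0;   /* µsec to access/poll the device   */
        const double word_cost   = 2.0;    /* µsec to transfer one word        */
        const int    buffers[]   = {0, 8, 16, 512, 2048};

        for (int i = 0; i < 5; i++) {
            double words    = buffers[i] ? buffers[i] / 4.0 : 1.0; /* words per poll   */
            double interval = words * word_period;                 /* polling interval */
            double idle     = 100.0 * poll_cost / interval;
            double busy     = 100.0 * (poll_cost + words * word_cost) / interval;
            printf("%5d bytes: interval %6.0f usec, idle %5.1f%%, busy %5.1f%%\n",
                   buffers[i], interval, idle, busy);
        }
        return 0;
    }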

  14. Interrupt Driven Data Transfer
  [Figure: a user program (add, sub, and, or, nop) is running; (1) an I/O interrupt arrives, (2) the PC is saved, (3) control jumps to the interrupt service address, and the service routine (read, store, ..., rti) performs (4) the memory transfer before returning to the user program.]
  • User program progress is only halted during the actual transfer.
  • Overhead to set up the transfers, for 1000 transfers at 1 ms each:
      1000 interrupts @ 2 µsec per interrupt
      + 1000 interrupt services @ 98 µsec each
      = 0.1 CPU seconds
  • Memory transfer: device transfer rate = 10 MBytes/sec => 0.1 x 10^-6 sec/byte => 0.1 µsec/byte => 1000 bytes = 100 µsec; 1000 transfers x 100 µsec = 100 ms = 0.1 CPU seconds.
  • Still far from the device transfer rate, and half the time goes to interrupt overhead!
  A C sketch of such a handler follows.
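A sketch of the interrupt-driven version in C. The register addresses and buffer handling are illustrative; in practice the routine is installed through the platform's vector mechanism, and the return-from-interrupt (rti) is emitted by platform glue rather than written by hand:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical device registers (illustrative addresses). */
    #define DEV_STATUS   (*(volatile uint32_t *)0x80001004u)
    #define DEV_DATA     (*(volatile uint32_t *)0x80001008u)
    #define STATUS_READY 0x1u

    #define BUF_WORDS 256
    static uint32_t buf[BUF_WORDS];
    static volatile size_t buf_head;          /* filled by the handler */

    /* Installed (platform-specifically) as the device's interrupt handler.
     * The user program keeps running; it is only halted for this routine. */
    void device_isr(void)
    {
        while (DEV_STATUS & STATUS_READY) {          /* drain whatever is ready */
            buf[buf_head % BUF_WORDS] = DEV_DATA;    /* read data, store data   */
            buf_head++;
        }
    }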

  15. Performance of Interrupts
  • Consider a 1 Mbps serial line, a 32-bit word at a time: the external device delivers a word every 32 µsec.
  • 2 µsec to deliver the interrupt, 98 µsec service routine, 2 µsec per word.
  • When the machine is idle there is no overhead.
  • Same buffer trick: choose a 24-byte buffer to match the latency requirement.
  [Graph: CPU time and utilization of the serial line vs. buffer size in words.]

  16. Performance of Interrupts
  • Same parameters as the previous slide: 2 µsec to deliver the interrupt, 98 µsec service routine, 2 µsec per word; no overhead when the machine is idle.
  • Same buffer trick: choose a 512-byte buffer to match the packet size.
  The overhead arithmetic for both buffer choices is sketched below.
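A sketch of the overhead arithmetic for slides 15-16, assuming the stated costs (2 µsec interrupt delivery, 98 µsec service routine, 2 µsec per word) and one interrupt per filled buffer. The 4-byte case is my addition to show why buffering is needed at all; 24 and 512 bytes are the slide's choices:

    #include <stdio.h>

    int main(void)
    {
        const double word_period = 32.0;  /* µsec per 4-byte word at 1 Mbps */
        const double deliver = 2.0, service = 98.0, per_word = 2.0;
        const int buffers[] = {4, 24, 512};   /* bytes per interrupt */

        for (int i = 0; i < 3; i++) {
            double words  = buffers[i] / 4.0;
            double arrive = words * word_period;               /* line time for the buffer */
            double cpu    = deliver + service + words * per_word;
            /* 4 B: >100% of the CPU (can't keep up); 24 B: ~58%; 512 B: ~9% */
            printf("%4d-byte buffer: %.0f usec CPU per %.0f usec of data (%.0f%% of CPU)\n",
                   buffers[i], cpu, arrive, 100.0 * cpu / arrive);
        }
        return 0;
    }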

  17. Direct Memory Access
  • The CPU sends a starting address, a direction, and a length count to the DMAC, then issues "start" (see the sketch below).
  • The DMAC provides handshake signals for the peripheral controller, and memory addresses and handshake signals for memory.
  • Time to do 1000 transfers at 1 msec each:
      1000 DMA set-up sequences @ 50 µsec
      + 1000 interrupts @ 2 µsec
      + 1000 interrupt service sequences @ 48 µsec
      with no per-byte transfer time on the CPU
      = 0.1 sec of CPU time, 0.1 sec of device time.
  [Figure: memory-mapped I/O address space from 0 to n containing ROM, RAM, and the peripherals; the CPU, Memory, DMAC, and IOC sit on the bus. A DMA controller can also be placed on each IOC.]
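A sketch of the CPU's side of a DMA transfer in C. The DMAC register layout, bit values, and base address are hypothetical, and a real driver would sleep until the completion interrupt instead of polling the done bit:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical memory-mapped DMA controller registers. */
    typedef struct {
        volatile uint32_t addr;     /* starting memory address */
        volatile uint32_t count;    /* length count, in bytes  */
        volatile uint32_t control;  /* direction + start bit   */
        volatile uint32_t status;   /* done / error bits       */
    } dmac_t;

    #define DMAC           ((dmac_t *)0x80002000u)
    #define DIR_DEV_TO_MEM 0x2u
    #define CTRL_START     0x1u
    #define STATUS_DONE    0x1u

    /* CPU sends starting address, direction, and length, then issues "start".
     * The DMAC handshakes with the peripheral and with memory on its own. */
    static void dma_read(void *dst, size_t len)
    {
        DMAC->addr    = (uint32_t)(uintptr_t)dst;
        DMAC->count   = (uint32_t)len;
        DMAC->control = DIR_DEV_TO_MEM | CTRL_START;
        while ((DMAC->status & STATUS_DONE) == 0)
            ;   /* in practice: block until the completion interrupt */
    }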

  18. Input/Output Channels
  • Limited programmability, fixed task set; the CPU downloads a program to the channel.
  [Figure: the CPU and the channel share the main memory bus with Mem; the channel drives an I/O bus with devices D1 ... Dn. (1) The CPU issues an instruction to the channel; (2), (3) device to/from memory transfers are controlled by the channel directly; (4) the channel interrupts the CPU when done.]
  • Like DMA, there can be a single channel controlling all I/O devices, multiple channels each controlling many devices, or one channel per device.
  • The channel steals memory cycles.

  19. Input/Output Processors
  • Similar approach using an FEP (e.g. the PDP-11 front-end processor for the KL10) or a co-processor.
  [Figure: the CPU and the IOP share the main memory bus with Mem; the IOP drives an I/O bus with devices D1 ... Dn. (1) The CPU issues "OP, Device, Address" to the IOP, naming the target device and where the commands are, which selects a task set in the IOP; (2) the IOP looks in memory for its commands; (3) each command block holds OP (what to do), Addr (where to put the data), Cnt (how much), and Other (special requests); (4) the IOP interrupts the CPU when done.]
  • Can do local processing (byte-swapping, echo negotiation, etc.).
  • The IOP steals memory cycles.
  A struct sketch of the command block follows.
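A sketch of the memory-resident command block as a C struct. Only the four fields (OP, Addr, Cnt, Other) and the CPU's "OP, Device, Address" instruction come from the slide; the field widths and names are hypothetical:

    #include <stdint.h>

    /* Command block the IOP fetches from main memory (illustrative layout). */
    typedef struct {
        uint32_t op;      /* what to do                         */
        uint32_t addr;    /* where to put (or take) the data    */
        uint32_t cnt;     /* how much                           */
        uint32_t other;   /* special requests (byte-swap, etc.) */
    } iop_cmd_t;

    /* The CPU's single instruction to the IOP names the operation, the
     * target device, and the address where the command block(s) live. */
    typedef struct {
        uint32_t op;
        uint32_t device;   /* target device                    */
        uint32_t address;  /* where the commands are in memory */
    } iop_start_t;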

  20. Relationship to Processor Architecture
  • I/O instructions and buses have largely disappeared.
  • Interrupt vectors have been replaced by jump tables:
      vectored:    PC <- M[IVA + interrupt number]
      jump table:  PC <- IVA + interrupt number
    (a sketch of both dispatch styles follows this slide)
  • Interrupts:
    • The stack has been replaced by shadow registers.
    • The handler saves registers and re-enables higher-priority interrupts.
    • Interrupt types are reduced in number; the handler must query the interrupt controller.
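A sketch of the two dispatch styles in C (all names are illustrative). The vectored form loads a handler address out of a table in memory; the jump-table form branches directly into a table of code slots, modelled here with a switch:

    #include <stdint.h>

    #define NUM_INTS 32

    /* Vectored style: PC <- M[IVA + interrupt number].
     * The table holds handler *addresses*; dispatch loads one, then jumps. */
    typedef void (*handler_t)(void);
    handler_t vector_table[NUM_INTS];        /* plays the role of the table at "IVA" */

    void dispatch_vectored(unsigned n)
    {
        if (vector_table[n])
            vector_table[n]();   /* one extra memory load, then an indirect jump */
    }

    /* Jump-table style: PC <- IVA + interrupt number.
     * Hardware jumps *into* a table of fixed-size code slots; each slot just
     * branches to the real handler. Each case below stands in for one slot. */
    void dispatch_jump_table(unsigned n)
    {
        switch (n) {
        case 0:  /* branch to handler 0 */ break;
        case 1:  /* branch to handler 1 */ break;
        default: /* spurious interrupt   */ break;
        }
    }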

  21. Relationship to Processor Architecture
  • Caches, required for processor performance, cause problems for I/O:
    • Flushing is expensive, and I/O pollutes the cache.
    • The solution is borrowed from shared-memory multiprocessors: "snooping" cache coherency protocols.
  • Virtual memory frustrates DMA:
    • If a transfer is larger than a page, the physical pages need not be contiguous; DMA on virtual addresses requires wiring the pages down.
  • Load/store architectures are at odds with atomic operations:
    • load locked, store conditional (sketched below).
  • Stateful processors are hard to context switch.
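The load-locked / store-conditional idiom, sketched in C with the GCC/Clang atomic builtins; on LL/SC machines (MIPS, Alpha, ARM) the compare-exchange loop is compiled down to an ll/sc retry loop. The function name is mine:

    #include <stdint.h>

    /* Atomic increment built from the LL/SC-style retry pattern:
     * read the old value, compute the new one, try to commit; if another
     * processor wrote the word in between, the commit fails and we retry. */
    static uint32_t atomic_increment(volatile uint32_t *addr)
    {
        uint32_t old, new_val;
        do {
            old     = __atomic_load_n(addr, __ATOMIC_RELAXED);    /* "load locked"       */
            new_val = old + 1;
        } while (!__atomic_compare_exchange_n(addr, &old, new_val,
                                              /*weak=*/1,
                                              __ATOMIC_SEQ_CST,
                                              __ATOMIC_RELAXED));  /* "store conditional" */
        return new_val;
    }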

  22. Interconnect Trends
  • Interconnect = the glue that interfaces computer system components.
  • High-speed hardware interfaces + logical protocols.
  • A spectrum from networks through channels to backplanes:
      networks:   message-based, narrow pathways, distributed arbitration
      backplanes: memory-mapped, wide pathways, centralized arbitration

  23. 1990 Bus Survey (P&H, 1st Ed)

                          VME         FutureBus    Multibus II      IPI           SCSI
  Signals                 128         96           96               16            8
  Addr/Data mux           no          yes          yes              n/a           n/a
  Data width (bits)       16-32       32           32               16            8
  Masters                 multi       multi        multi            single        multi
  Clocking                Async       Async        Sync             Async         either
  MB/s (0 ns, word)       25          37           20               25            1.5 (async), 5 (sync)
  MB/s (150 ns, word)     12.9        15.5         10               =             =
  MB/s (0 ns, block)      27.9        95.2         40               =             =
  MB/s (150 ns, block)    13.6        20.8         13.3             =             =
  Max devices             21          20           21               8             7
  Max bus length (m)      0.5         0.5          0.5              50            25
  Standard                IEEE 1014   IEEE 896.1   ANSI/IEEE 1296   ANSI X3.129   ANSI X3.131

  24. SCSI: Small Computer System Interface
  • Up to 8 devices communicate on a bus or "string" at sustained speeds of 4-5 MBytes/sec; SCSI-2 goes up to 20 MB/sec.
  • Devices can be slaves ("targets") or masters ("initiators").
  • The SCSI protocol is a series of "phases", during which specific actions are taken by the controller and the SCSI disks (see the sketch after this slide):
    • Bus Free: no device is currently accessing the bus.
    • Arbitration: when the SCSI bus goes free, multiple devices may request (arbitrate for) the bus; fixed priority by address.
    • Selection: informs the target that it will participate (Reselection if disconnected).
    • Command: the initiator reads the SCSI command bytes from host memory and sends them to the target.
    • Data Transfer: data in or out, between initiator and target.
    • Message Phase: message in or out, between initiator and target (identify, save/restore data pointer, disconnect, command complete).
    • Status Phase: from the target, just before command complete.
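A sketch of the phase sequence as a C enumeration. The enum names and the "nominal order" array are mine; the phases themselves are the ones listed on the slide, and real transactions may disconnect and reselect between phases:

    /* SCSI bus phases from the slide, as a simple state enumeration. */
    typedef enum {
        BUS_FREE,       /* no device is using the bus                     */
        ARBITRATION,    /* devices compete; fixed priority by address     */
        SELECTION,      /* initiator tells the target it will participate */
        COMMAND,        /* initiator sends the SCSI command bytes         */
        DATA_TRANSFER,  /* data in or out between initiator and target    */
        STATUS,         /* target reports status before completion        */
        MESSAGE         /* identify, save/restore pointers, disconnect,
                           command complete                               */
    } scsi_phase_t;

    /* Nominal order of a simple, non-disconnecting command. */
    static const scsi_phase_t simple_command[] = {
        BUS_FREE, ARBITRATION, SELECTION, COMMAND,
        DATA_TRANSFER, STATUS, MESSAGE, BUS_FREE
    };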

  25. SCSI "Bus": Channel Architecture
  • Peer-to-peer protocols
  • Initiator/target roles
  • Linear byte streams
  • Disconnect/reconnect

  26. 1993 I/O Bus Survey (P&H, 2nd Ed)

  Bus                  SBus       TurboChannel   MicroChannel    PCI
  Originator           Sun        DEC            IBM             Intel
  Clock rate (MHz)     16-25      12.5-25        async           33
  Addressing           Virtual    Physical       Physical        Physical
  Data sizes (bits)    8,16,32    8,16,24,32     8,16,24,32,64   8,16,24,32,64
  Master               Multi      Single         Multi           Multi
  Arbitration          Central    Central        Central         Central
  32-bit read (MB/s)   33         25             20              33
  Peak (MB/s)          89         84             75              111 (222)
  Max power (W)        16         26             13              25
