
Memory Hierarchy



  1. Memory Hierarchy • Storage Technology Trends • Memory Bandwidth Requirements • Hierarchical Structure • Locality of Reference • Caching • Operating Systems Support - Virtual Memory, I/O, File Systems

  2. Storage Technology Trends • CPU Speeds vs. (Storage) Access Rates • SRAMs have kept up with the high growth rate in CPU speed • DRAMs and disks have not • Disk seek times in particular have improved only slowly • The same is true of the initial setup/spooling time for tapes

  3. Memory Bandwidth Requirements

  4. Hierarchy • Early days: register set, primary, secondary, archival • Multi-level inclusion • Present day: register set, L1 cache, L2 cache, RAM, disk, networked storage, web storage, archival

  5. Locality of Reference • Locality Principle: Locus of memory references is small. • Temporal • Spatial • Sequential (Stride-1)
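
The difference between the kinds of locality is easiest to see in code. The following minimal C sketch (the 1024×1024 array size is purely illustrative) contrasts a stride-1, row-major traversal, which exploits spatial locality, with a column-major traversal of the same array, which touches a new cache line on almost every access.

    /* Sketch: stride-1 (row-major) traversal exploits spatial locality,
       while column-major traversal of the same array does not. */
    #include <stdio.h>

    #define N 1024

    static int a[N][N];

    long sum_row_major(void)          /* stride-1: consecutive addresses */
    {
        long s = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    long sum_col_major(void)          /* stride-N: a new cache line per access */
    {
        long s = 0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }

    int main(void)
    {
        printf("%ld %ld\n", sum_row_major(), sum_col_major());
        return 0;
    }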

  6. Performance • t_eff = t_h + P_miss (t_{h+1} - t_h) • memory efficiency = 100 / (1 + P_miss (R - 1)) • R = t_{h+1} / t_h • Efficiency = 100 when R = 1 or P_miss = 0 • Consider R = 10 (SRAM), 50 (DRAM), 10000 (disk) • What is a good P_miss for each?
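
As a rough way to answer the question on the slide, the C sketch below evaluates the efficiency formula for the three R values given; the particular P_miss values tried are only illustrative.

    /* Sketch: memory efficiency = 100 / (1 + P_miss * (R - 1)),
       with R = t_{h+1} / t_h, for the three R values on the slide. */
    #include <stdio.h>

    int main(void)
    {
        double ratios[] = { 10.0, 50.0, 10000.0 };       /* SRAM, DRAM, disk */
        double pmiss[]  = { 0.1, 0.01, 0.001, 0.0001 };  /* illustrative values */

        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 4; j++) {
                double R   = ratios[i];
                double eff = 100.0 / (1.0 + pmiss[j] * (R - 1.0));
                printf("R=%-7.0f Pmiss=%-7.4f efficiency=%6.2f%%\n",
                       R, pmiss[j], eff);
            }
        }
        return 0;
    }

The numbers show why the levels are held to very different miss rates: at R = 10 a miss probability of a few percent still keeps efficiency high, while at R = 10000 even P_miss = 0.0001 already cuts efficiency to about 50%.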

  7. Caching • Caching brings the levels closer • caches are transparent • L1, L2 cache (OS/App*) • Buffer(App), Disk cache(OS/App), network client cache, browser cache(OS/app), proxy cache(OS/app), distribution cache(OS/app/client)

  8. OS Support • Virtual Memory • I/O • File Systems

  9. Virtual Memory • Virtual addressing vs. Physical addressing • Virtual address space typically determined by the instruction set and (address) word size • Translation needed • Treat RAM as a cache • Demand Paging • (unallocated, cached, uncached) • Mapping (table) maintained by OS in RAM • Split, lookup, and combine performed by the MMU (part of the architecture) • Page Fault – the page is moved from disk to RAM, replacing a cached page
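
The split/lookup/combine sequence can be sketched in a few lines of C. Everything here is a toy assumption (4 KB pages, a 4-entry flat page table); a real MMU performs the same steps in hardware, with the OS maintaining the table.

    /* Sketch: split a virtual address, look up the page table, combine
       the frame number with the offset. Page size and table are toy values. */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12                        /* 4 KB pages (assumption) */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)
    #define NPAGES     4                         /* toy virtual address space */

    /* Toy page table: virtual page number -> physical frame number;
       -1 marks a page that is not resident (access would page-fault). */
    static int page_table[NPAGES] = { 3, 7, -1, 0 };

    long translate(uint32_t vaddr)
    {
        uint32_t vpn    = vaddr >> PAGE_SHIFT;        /* split: page number */
        uint32_t offset = vaddr & (PAGE_SIZE - 1);    /* split: offset      */

        if (vpn >= NPAGES || page_table[vpn] < 0)
            return -1;                           /* page fault: OS must load page */

        return ((long)page_table[vpn] << PAGE_SHIFT) | offset;  /* combine */
    }

    int main(void)
    {
        printf("0x%lx\n", translate(0x1234));    /* vpn 1 -> frame 7 -> 0x7234 */
        printf("%ld\n",   translate(0x2234));    /* vpn 2 not resident -> -1   */
        return 0;
    }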

  10. Input/Output at the architectural level • Memory vs. Disk • Direct bus for memory • Disk shares a bus with other I/O devices • Recall the speed mismatch • Thus disk access is done via I/O operations • Asynchronous operation (interrupts, DMA)

  11. I/O at OS (unix) level • Programs vs. OS Kernel • Kernel does not impose structure on I/O • I/O is done using “streams of bytes” • Unix processes • Use descriptors (for file, pipe, socket) • I/O devices are treated as files
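
A minimal sketch of the "streams of bytes" model: a descriptor obtained with open() is read and written with no structure imposed by the kernel, and a device node is opened exactly like an ordinary file (/dev/urandom is used here purely as an example of a device).

    /* Sketch: Unix I/O through descriptors as untyped byte streams;
       a device is opened and read just like a regular file. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char buf[16];
        int fd = open("/dev/urandom", O_RDONLY);   /* device opened like a file */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        ssize_t n = read(fd, buf, sizeof buf);     /* kernel imposes no structure */
        if (n > 0)
            write(STDOUT_FILENO, buf, (size_t)n);  /* same call works on a terminal */
        close(fd);
        return 0;
    }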

  12. I/O Devices • Structured (block) or unstructured (character) • I/O operations are handled by kernel-resident software modules • Device drivers • System calls (open, pipe, socket, mknod, ioctl, read, write) • Scatter/Gather I/O
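
Scatter/gather I/O lets a single system call transfer data to or from several non-contiguous buffers. A minimal gather-write sketch using writev(2):

    /* Sketch: gather output with writev(2) - two separate buffers are
       written to a descriptor with one system call. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/uio.h>
    #include <unistd.h>

    int main(void)
    {
        char hdr[]  = "header: ";
        char body[] = "payload bytes\n";

        struct iovec iov[2];
        iov[0].iov_base = hdr;
        iov[0].iov_len  = strlen(hdr);
        iov[1].iov_base = body;
        iov[1].iov_len  = strlen(body);

        /* One writev call gathers both buffers into a single byte stream. */
        if (writev(STDOUT_FILENO, iov, 2) < 0) {
            perror("writev");
            return 1;
        }
        return 0;
    }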
