
Module-1 Multicore Architecture


Presentation Transcript


  1. Module-1 Multicore Architecture SSU Dr. A. Srinivas PES Institute of Technology Bangalore, India a.srinivas@pes.edu 9 – 20 July 2012

  2. Schedule Day-1: Module-1: Multicore Architecture; Module-2: Parallel Programming. Days 2-5: Parallel Programming with OpenMP. Assignment: JPEG Compression & Decompression using OpenMP parallel programming directives

  3. Memory Hierarchy of early computers: 3 levels • CPU registers • DRAM Memory • Disk storage

  4. CACHE MEMORY The principle of locality is exploited to speed up main memory access by introducing small, fast memories known as CACHE MEMORIES that hold blocks of the most recently referenced instructions and data items. A cache is a small, fast storage device that holds the operands and instructions most likely to be used by the CPU.
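A minimal C sketch of the locality idea (array size is illustrative): the row-major traversal touches consecutive addresses, so every element brought in with a cache block gets used, while the column-major traversal jumps a whole row per access and misses far more often.

```c
#include <stdio.h>

#define N 1024

static double a[N][N];

/* Good spatial locality: inner loop walks consecutive addresses,
   so each fetched cache block is fully used. */
double sum_row_major(void) {
    double sum = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

/* Poor spatial locality: inner loop strides by a whole row,
   so each access typically lands in a different cache block. */
double sum_col_major(void) {
    double sum = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];
    return sum;
}

int main(void) {
    printf("%f %f\n", sum_row_major(), sum_col_major());
    return 0;
}
```

Both functions compute the same sum; only the access order, and hence the cache behaviour, differs.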

  5. Due to the increasing gap between CPU and main memory speeds, a small SRAM memory called the L1 cache is inserted between them. The L1 cache can be accessed almost as fast as the registers, typically in 1 or 2 clock cycles. As the gap continued to widen, an additional cache, the L2 cache, was inserted between the L1 cache and main memory; it is accessed in a few more clock cycles than L1, but far fewer than main memory.

  6. The L2 cache is attached to the memory bus or to its own cache bus. • Some high-performance systems also include an additional L3 cache, which sits between L2 and main memory. The arrangement differs, but the principle is the same. • The cache is placed both physically and logically closer to the CPU than the main memory.

  7. CACHE LINES / BLOCKS • Cache memory is subdivided into cache lines. • Cache Lines / Blocks: the smallest unit of memory that can be transferred between the main memory and the cache.
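A hedged sketch of cache-line granularity, assuming a 64-byte line (common, but not guaranteed on every CPU): touching only one double per line still forces the whole 64-byte block to be transferred, so this sparse loop generates roughly as many line fills as a full traversal.

```c
#include <stdio.h>
#include <stdlib.h>

#define LINE_BYTES 64   /* assumed cache line size; not universal */

int main(void) {
    size_t n = 1 << 20;                          /* 1M doubles (8 MB)           */
    size_t stride = LINE_BYTES / sizeof(double); /* 8 doubles per 64-byte line  */
    double *a = calloc(n, sizeof(double));
    if (!a) return 1;

    /* One access per cache line: each read pulls in a whole block
       even though only 8 of its 64 bytes are used. */
    double sum = 0.0;
    for (size_t i = 0; i < n; i += stride)
        sum += a[i];

    printf("sum = %f\n", sum);
    free(a);
    return 0;
}
```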

  8. Core Vs Processor - A multicore processor contains more than one CPU (core) inside a single package; - A quad-core processor of 3 GHz has four cores in the CPU, each running at 3 GHz with its own cache.
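A small sketch using the standard OpenMP runtime call omp_get_num_procs() (compile with -fopenmp): on a quad-core machine it typically reports 4, or 8 if Hyper-Threading exposes two hardware threads per core.

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    /* Number of processors (cores / hardware threads) the OpenMP
       runtime can schedule threads onto. */
    printf("Available processors: %d\n", omp_get_num_procs());
    return 0;
}
```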

  9. Amdahl’s Law Speedup = Time(old) / Time(new). In terms of number of cores: Speedup = 1 / (S + (1 - S)/n), where S is the fraction of time spent executing the serialized portion of the parallelized version and n is the number of cores.
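A small C helper evaluating the formula above; the sample values (S = 0.1, n = 4) are illustrative only and show that a 10% serial fraction limits four cores to about a 3.08x speedup.

```c
#include <stdio.h>

/* Amdahl's law: S is the serial fraction of the parallelized
   program, n is the number of cores. */
double amdahl_speedup(double S, int n) {
    return 1.0 / (S + (1.0 - S) / n);
}

int main(void) {
    /* Illustrative values: 10% serial work on 4 cores. */
    printf("Speedup = %.2f\n", amdahl_speedup(0.1, 4)); /* ~3.08 */
    return 0;
}
```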

  10. Multicore Philosophy - Two or more cores within a single die - each core executes its own instruction stream and has its own architectural resources
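A minimal OpenMP sketch (compile with -fopenmp) in which each thread, normally scheduled onto its own core, reports its thread number; this previews the directives used in the rest of the course.

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    /* Each thread in the team executes this block, typically one per core. */
    #pragma omp parallel
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}
```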

  11. Hyper-Threading: • Parts of a single physical processor are shared between threads • The execution engine is shared • OS task switching does not happen in Hyper-Threading • The processor is kept as busy as possible

  12. Branch Target Buffer, Translation Lookaside Buffer
