
Multicore: Panic or Panacea?

This presentation examines the spread of multicore processors from servers to desktops, laptops, and cellphones, and the challenges they raise in power consumption, parallel scaling, and programming for parallelism, along with potential solutions and related research directions.

Presentation Transcript


  1. Multicore: Panic or Panacea? Mikko H. Lipasti Associate Professor Electrical and Computer Engineering University of Wisconsin – Madison http://www.ece.wisc.edu/~pharm

  2. Multicore Mania • First, servers • IBM Power4, 2001 • Then desktops • AMD Athlon X2, 2005 • Then laptops • Intel Core Duo, 2006 • Soon, your cellphone • ARM MPCore, prototypes for a while now Mikko Lipasti-University of Wisconsin

  3. What is behind this trend? • Moore’s Law • Chip power consumption • Single-thread performance trend [source: Intel] Mikko Lipasti-University of Wisconsin

  4. Dynamic Power • Static CMOS: current flows when active • Combinational logic evaluates new inputs • Flip-flop, latch captures new value (clock edge) • Terms • C: capacitance of circuit • wire length, number and size of transistors • V: supply voltage • A: activity factor • f: frequency • Future: Fundamentally power-constrained Mikko Lipasti-University of Wisconsin
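
The slide lists the terms but the transcript omits the equation they belong to; the standard dynamic-power expression they come from is

    P_{dyn} = A \cdot C \cdot V^{2} \cdot f

Because power grows with the square of the supply voltage, running more, slower cores at reduced voltage and frequency is the usual way to add throughput inside a fixed power budget.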

  5. Easy answer: Multicore [Figure: a single chip divided into many identical cores] Mikko Lipasti-University of Wisconsin

  6. Amdahl’s Law • f – fraction that can run in parallel • 1-f – fraction that must run serially [Figure: execution time vs. number of CPUs (1 to n); the parallel fraction f shrinks as CPUs are added, the serial fraction 1-f does not] Mikko Lipasti-University of Wisconsin
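
The equation itself does not survive in the transcript; with the definitions above, Amdahl's Law reads

    \text{Speedup}(n) = \frac{1}{(1-f) + f/n} \le \frac{1}{1-f}

For example, with f = 0.9 and n = 16 cores the speedup is 1 / (0.1 + 0.9/16) = 6.4, and no number of cores can push it past 10.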

  7. Fixed Chip Power Budget • Amdahl’s Law • Ignores (power) cost of n cores • Revised Amdahl’s Law • More cores → each core is slower • Parallel speedup < n • Serial portion (1-f) takes longer • Also, interconnect and scaling overhead [Figure: execution time vs. number of CPUs under a fixed power budget] Mikko Lipasti-University of Wisconsin
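
One way to write down the revised law the slide describes (my notation; the transcript gives no formula): if the fixed power budget forces each of the n cores to run at relative speed s(n) <= 1, then

    \text{Speedup}(n) = \frac{s(n)}{(1-f) + f/n}

Both terms are stretched by 1/s(n), so the serial portion (1-f) literally takes longer than on one fast core, and the parallel speedup stays below n.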

  8. Fixed Power Scaling • Fixed power budget forces slow cores • Serial code quickly dominates Mikko Lipasti-University of Wisconsin
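
A small sketch of why serial code quickly dominates under a fixed power budget. The n^{-1/3} per-core slowdown is an illustrative assumption (roughly, voltage/frequency scaling with power split across n cores), not a figure taken from the slides:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double f = 0.9;                      /* parallel fraction */
        for (int n = 1; n <= 64; n *= 2) {
            double s = pow(n, -1.0 / 3.0);   /* assumed per-core speed under fixed power */
            double speedup = s / ((1.0 - f) + f / n);
            printf("n=%2d  core speed=%.2f  speedup=%.2f\n", n, s, speedup);
        }
        return 0;
    }

Under these assumptions the speedup curve peaks at a modest core count and then falls: the slower cores hurt the serial phase more than the extra cores help the parallel one.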

  9. Predictions and Challenges • Parallel scaling limits many-core • >4 cores only for well-behaved programs • Optimistic about new applications • Interconnect overhead • Single-thread performance • Will degrade unless we innovate • Parallel programming • Express/extract parallelism in new ways • Retrain programming workforce Mikko Lipasti-University of Wisconsin

  10. Research Agenda • Programming for parallelism • Sources of parallelism • New applications, tools, and approaches • Single-thread performance and power • Most attractive to programmer/user • Chip multiprocessor overheads • Interconnect, caches, coherence, fairness Mikko Lipasti-University of Wisconsin

  11. Finding Parallelism • Functional parallelism • Car: {engine, brakes, entertain, nav, …} • Game: {physics, logic, UI, render, …} • Automatic extraction [UW Multiscalar] • Decompose serial programs • Data parallelism • Vector, matrix, db table, pixels, … • Request parallelism • Web, shared database, telephony, … Mikko Lipasti-University of Wisconsin
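
The slides name the sources of parallelism but the transcript carries no code; as one concrete illustration of data parallelism (the function name and OpenMP pragma are my own example), the same independent operation is applied to every pixel, so iterations divide cleanly across cores:

    /* Brighten an image: each pixel is updated independently of the others. */
    void brighten(unsigned char *pixels, long n, int delta) {
        #pragma omp parallel for            /* split the pixel range across cores */
        for (long i = 0; i < n; i++) {
            int v = pixels[i] + delta;
            pixels[i] = (unsigned char)(v > 255 ? 255 : (v < 0 ? 0 : v));
        }
    }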

  12. Balancing Work • Amdahl’s parallel phase f: all cores busy • If not perfectly balanced • (1-f) term grows (f not fully parallel) • Performance scaling suffers • Manageable for data & request parallel apps • Very difficult problem for other two: • Functional parallelism • Automatically extracted • Scale power to mismatch [Multiscalar] Mikko Lipasti-University of Wisconsin
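
For the balancing problem itself, one common remedy (an illustrative OpenMP example of my own, not something the slides prescribe) is to hand out work dynamically instead of pre-assigning equal chunks:

    /* Per-item cost varies widely, so a static, equal split of iterations
       would leave some cores idle while others lag behind. */
    static double varying_cost_work(double x, long i) {
        double acc = x;
        for (long k = 0; k < (i % 1000) * 1000; k++)
            acc += 1e-9 * (double)k;
        return acc;
    }

    void process_items(double *items, long n) {
        /* schedule(dynamic): idle threads grab the next chunk of 16 iterations,
           keeping all cores busy despite the imbalance. */
        #pragma omp parallel for schedule(dynamic, 16)
        for (long i = 0; i < n; i++)
            items[i] = varying_cost_work(items[i], i);
    }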

  13. Coordinating Work • Synchronization • Some data somewhere is shared • Coordinate/order updates and reads • Otherwise → chaos • Traditionally: locks and mutual exclusion • Hard to get right, even harder to tune for perf. • Research: Transactional Memory [UW Multifacet] • Programmer: Declare potential conflict • Hardware and/or software: speculate & check • Commit or roll back and retry Mikko Lipasti-University of Wisconsin
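
A minimal sketch of the traditional lock-based approach the slide contrasts with transactional memory (the counter and lock are my own example):

    #include <pthread.h>

    static long shared_counter = 0;
    static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

    void add_to_counter(long amount) {
        pthread_mutex_lock(&counter_lock);    /* mutual exclusion begins */
        shared_counter += amount;             /* only one thread at a time runs this */
        pthread_mutex_unlock(&counter_lock);  /* mutual exclusion ends */
    }

With transactional memory the programmer would instead declare the update as a potentially conflicting atomic region, and the hardware and/or software would speculate, detect conflicts, and commit or roll back and retry, as the slide describes.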

  14. Single-thread Performance • Still most attractive source of performance • Speeds up parallel and serial phases • Can use it to buy back power • Must focus on power consumption • Performance benefit ≥ Power cost Mikko Lipasti-University of Wisconsin
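
One way to make the last bullet precise (my reading, not a formula from the slides): for a fixed amount of work, energy is power divided by performance, so a single-thread optimization avoids raising energy per task as long as

    \frac{\text{perf}_{new}}{\text{perf}_{old}} \ge \frac{P_{new}}{P_{old}}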

  15. Single-thread Performance • Hardware accelerators and circuits • Domain-specific [UW MESA] • Reconfigurable [UW Compton] • VLSI and design automation [UW WISCAD, Kursun] • Increasing frequency • Seems prohibitive: clock power • Clever clocking schemes can help [UW Pharm] • Increasing instruction-level parallelism [UW Multiscalar, UW Pharm, UW Smith] • Without blowing power budget • Alternatively, reduce power for same performance Mikko Lipasti-University of Wisconsin

  16. Chip Multiprocessor Overheads • Core Interconnect [UW Pharm] • 80% of chip power [Borkar, ISLPED ‘07 panel] • Need fundamentally different approach • Revisit circuit switching • Cache coherence [UW Multifacet, Pharm] • Match workload behavior • Optimize for on-chip communication Mikko Lipasti-University of Wisconsin

  17. Chip Multiprocessor Overheads • Shared caches [UW Multifacet, Multiscalar, Smith] • On-chip memory can be shared • Optimize replacement, replication • Fairness [UW Smith] • Maintain performance isolation • Share resources fairly (memory, caches) Mikko Lipasti-University of Wisconsin

  18. Research Groups @ UW Mikko Lipasti-University of Wisconsin

  19. Conclusion • Forecast • Limited multicore (≤4) is here to stay • Manycore (>4) will find its place • Hardware Challenges • Single-thread performance and power • Multicore overhead • Software Challenges • Finding application parallelism • Creating correct parallel programs • Creating scalable parallel programs Mikko Lipasti-University of Wisconsin

  20. Questions? http://www.ece.wisc.edu/~pharm Mikko Lipasti-University of Wisconsin
