
Samira Khan University of Virginia Sep 2, 2019

This course explores fundamental concepts and computing models in data-centric system design. Topics include computer architecture, abstraction layers, processing-in-memory technologies, and more.





Presentation Transcript


  1. Data-Centric System Design CS 6501 Fundamental Concepts: Computing Models Samira Khan University of Virginia Sep 2, 2019 The content and concept of this course are adapted from CMU ECE 740

  2. AGENDA • Review from last lecture • Why study computer architecture? • Fundamental concepts • Computing models

  3. LAST LECTURE RECAP • What it means/takes to be a good (computer) architect • Roles of a computer architect (look everywhere!) • Levels of transformation • Abstraction layers, their benefits, and the benefits of comfortably crossing them • An example problem and solution ideas • Designing a system with processing-in-memory technologies • Course Logistics • Assignments: HW (today), Review Set 1 (Next Wednesday)

  4. REVIEW: KEY TAKEAWAY • Breaking the abstraction layers (between components and transformation hierarchy levels) and knowing what is underneath enables you to solve problems and design better future systems • Cooperation between multiple components and layers can enable more effective solutions and systems

  5. HOW TO DO THE PAPER REVIEWS • 1: Brief summary • What is the problem the paper is trying to solve? • What are the key ideas of the paper? Key insights? • What is the key contribution to literature at the time it was written? • What are the most important things you take away from it? • 2: Strengths (most important ones) • Does the paper solve the problem well? • 3: Weaknesses (most important ones) • This is where you should think critically. Every paper/idea has a weakness. This does not mean the paper is necessarily bad. It means there is room for improvement and future research can accomplish this. • 4: Can you do (much) better? Present your thoughts/ideas. • 5: What have you learned/enjoyed/disliked in the paper? Why? • Review should be short and concise (~half a page to a page)

  6. AGENDA • Review from last lecture • Why study computer architecture? • Fundamental concepts • Computing models

  7. AN ENABLER: MOORE’S LAW Moore, “Cramming more components onto integrated circuits,” Electronics Magazine, 1965. Component counts double every other year Image source: Intel

  8. Number of transistors on an integrated circuit doubles ~ every two years Image source: Wikipedia

  9. RECOMMENDED READING • Moore, “Cramming more components onto integrated circuits,” Electronics Magazine, 1965. • Only 3 pages • A quote: “With unit cost falling as the number of components per circuit rises, by 1975 economics may dictate squeezing as many as 65 000 components on a single silicon chip.” • Another quote: “Will it be possible to remove the heat generated by tens of thousands of components in a single silicon chip?”

  10. WHAT DO WE USE THESE TRANSISTORS FOR?

  11. WHY STUDY COMPUTER ARCHITECTURE? • Enable better systems: make computers faster, cheaper, smaller, more reliable, … • By exploiting advances and changes in underlying technology/circuits • Enable new applications • Life-like 3D visualization 20 years ago? • Virtual reality? • Personalized genomics? Personalized medicine? • Enable better solutions to problems • Software innovation is built on trends and changes in computer architecture • > 50% performance improvement per year has enabled this innovation • Understand why computers work the way they do

  12. COMPUTER ARCHITECTURE TODAY (I) • Today is a very exciting time to study computer architecture • Industry is in a large paradigm shift (to multi-core and beyond: accelerators, FPGAs, processing-in-memory) – many different potential system designs possible • Many difficult problems motivating and caused by the shift • Power/energy constraints → multi-core? • Complexity of design → multi-core? • Difficulties in technology scaling → new technologies? • Memory wall/gap • Reliability wall/issues • Programmability wall/problem • Huge hunger for data and new data-intensive applications • No clear, definitive answers to these problems

  13. COMPUTER ARCHITECTURE TODAY (II) • These problems affect all parts of the computing stack – if we do not change the way we design systems • No clear, definitive answers to these problems • The transformation hierarchy, top to bottom: User → Problem → Algorithm → Program/Language → Runtime System (VM, OS, MM) → ISA → Microarchitecture → Logic → Circuits → Electrons • Look Up: many new demands from the top; fast-changing demands and personalities of users • Look Down: many new issues at the bottom

  14. COMPUTER ARCHITECTURE TODAY (III) • Computing landscape is very different from 10-20 years ago • Both UP (software and humanity trends) and DOWN (technologies and their issues), FORWARD and BACKWARD, and the resulting requirements and constraints • Examples: hybrid main memory, persistent memory/storage, Microsoft Catapult (FPGA), heterogeneous processors, general-purpose GPUs • Every component and its interfaces, as well as entire system designs, are being re-examined

  15. COMPUTER ARCHITECTURE TODAY (IV) • You can revolutionize the way computers are built, if you understand both the hardware and the software (and change each accordingly) • You can invent new paradigms for computation, communication, and storage • Recommended book: Thomas Kuhn, “The Structure of Scientific Revolutions” (1962) • Pre-paradigm science: no clear consensus in the field • Normal science: dominant theory used to explain/improve things (business as usual); exceptions considered anomalies • Revolutionary science: underlying assumptions re-examined

  16. Thomas S. Kuhn • PhD in Physics from Harvard in 1949 • During his PhD switched from physics to the History and Philosophy of Science • Joined the University of California, Berkeley as a professor of the History of Science in 1961 • Wrote the book “The Structure of Scientific Revolutions” in 1962

  17. COMPUTER ARCHITECTURE TODAY (IV) • You can revolutionize the way computers are built, if you understand both the hardware and the software (and change each accordingly) • You can invent new paradigms for computation, communication, and storage • Recommended book: Thomas Kuhn, “The Structure of Scientific Revolutions” (1962) • Pre-paradigm science: no clear consensus in the field • Normal science: dominant theory used to explain/improve things (business as usual); exceptions considered anomalies • Revolutionary science: underlying assumptions re-examined

  18. THE STRUCTURE OF SCIENTIFIC REVOLUTIONS • Stages along the history of science: (0) Pre-paradigm → (1) Normal Science → (2) Anomaly → (3) Crisis and Emergence of Scientific Theory → (4) Scientific Revolution

  22. COMPUTER ARCHITECTURE TODAY (IV) • Thomas Kuhn, “The Structure of Scientific Revolutions” (1962)

  23. … BUT, FIRST … • Let’s understand the fundamentals… • You can change the world only if you understand it well enough… • Especially the past and present dominant paradigms • And, their advantages and shortcomings – tradeoffs • And, what remains fundamental across generations • And, what techniques you can use and develop to solve problems

  24. AGENDA • Review from last lecture • Why study computer architecture? • Fundamental concepts • Computing models

  25. WHAT IS A COMPUTER? • Three key components • Computation • Communication • Storage (memory)

  26. WHAT IS A COMPUTER? • Processing: control (sequencing) and datapath • Memory: program and data • I/O

  27. THE VON NEUMANN MODEL/ARCHITECTURE • Also called stored program computer (instructions in memory). Two key properties: • Stored program • Instructions stored in a linear memory array • Memory is unified between instructions and data • The interpretation of a stored value depends on the control signals • Sequential instruction processing • One instruction processed (fetched, executed, and completed) at a time • Program counter (instruction pointer) identifies the current instr. • Program counter is advanced sequentially except for control transfer instructions When is a value interpreted as an instruction?

  28. THE VON NEUMANN MODEL/ARCHITECTURE • Recommended reading • Burks, Goldstine, von Neumann, “Preliminary discussion of the logical design of an electronic computing instrument,” 1946. • Stored program • Sequential instruction processing

  29. THE VON NEUMANN MODEL (OF A COMPUTER) • MEMORY: Memory Address Register, Memory Data Register • PROCESSING UNIT: ALU, TEMP registers • CONTROL UNIT: Instruction Pointer (IP), Instruction Register • INPUT and OUTPUT

  30. THE VON NEUMANN MODEL (OF A COMPUTER) • Q: Is this the only way that a computer can operate? • A: No. • Qualified Answer: But, it has been the dominant way • i.e., the dominant paradigm for computing • for N decades

  31. THE DATA FLOW MODEL (OF A COMPUTER) • Von Neumann model: An instruction is fetched and executed in control flow order • As specified by the instruction pointer • Sequential unless explicit control flow instruction • Dataflow model: An instruction is fetched and executed in data flow order • i.e., when its operands are ready • i.e., there is no instruction pointer • Instruction ordering specified by data flow dependence • Each instruction specifies “who” should receive the result • An instruction can “fire” whenever all operands are received • Potentially many instructions can execute at the same time • Inherently more parallel

  32. VON NEUMANN VS DATAFLOW • Consider a Von Neumann program • What is the significance of the program order? • What is the significance of the storage locations? • Which model is more natural to you as a programmer? • Sequential program: v <= a + b; w <= b * 2; x <= v - w; y <= v + w; z <= x * y • Dataflow graph: a and b feed a + node (v) and a * 2 node (w); v and w feed a - node (x) and a + node (y); x and y feed a final * node producing z

  33. MORE ON DATA FLOW • In a data flow machine, a program consists of data flow nodes • A data flow node fires (is fetched and executed) when all its inputs are ready • i.e., when all inputs have tokens • Data flow node and its ISA representation

  34. DATA FLOW NODES

  35. An Example

  36. What does this model perform? val = a ^ b

  37. What does this model perform? val = a ^ b val != 0

  38. What does this model perform? val = a ^ b val != 0 val &= val - 1

  39. What does this model perform? val = a ^ b val != 0 val &= val - 1; dist = 0; dist++

  40. Hamming Distance
  int hamming_distance(unsigned a, unsigned b) {
      int dist = 0;
      unsigned val = a ^ b;
      // Count the number of bits set
      while (val != 0) {
          // A bit is set, so increment the count and clear the bit
          dist++;
          val &= val - 1;
      }
      // Return the number of differing bits
      return dist;
  }

  41. Hamming Distance •  Number of positions at which the corresponding symbols are different. • The Hamming distance between: • "karolin" and "kathrin" is 3 • 1011101 and 1001001 is 2 • 2173896 and 2233796 is 3

  42. RICHARD HAMMING • Best known for Hamming Code • Won Turing Award in 1968 • Was part of the Manhattan Project • Worked at Bell Labs for 30 years • You and Your Research is mainly his advice to other researchers • Had given the talk many times during his lifetime • http://www.cs.virginia.edu/~robins/YouAndYourResearch.html

  43. Dataflow Machine: Instruction Templates • Each arc in the graph has an operand slot in the program • Each template holds an opcode, two operand slots with presence bits, and two destination fields • Templates for the example graph: 1: + → 3L, 4L; 2: * → 3R, 4R; 3: - → 5L; 4: + → 5R; 5: * → out • Dennis and Misunas, “A Preliminary Architecture for a Basic Data Flow Processor,” ISCA 1974 • One of the papers assigned for review next week

  44. Static Dataflow Machine (Dennis+, ISCA 1974) • Many such processors can be connected together • Programs can be statically divided among the processors • Instruction templates: Op, dest1, dest2, p1, src1, p2, src2 • Functional units (FUs) receive and send tokens of the form <s, p, v>

  45. Static Data Flow Machines • Mismatch between the model and the implementation • The model requires unbounded FIFO token queues per arc but the architecture provides storage for one token per arc • The architecture does not ensure FIFO order in the reuse of an operand slot • The static model does not support • Reentrant code • Function calls • Loops • Data Structures

