
FUNDAMENTALS OF COMPUTER DESIGN



  1. FUNDAMENTALS OF COMPUTER DESIGN

  2. Introduction • Today's desktop computers (costing less than $500) have higher performance, larger memory, and more storage than a computer bought in 1985 for $1 million. • The highest-performance microprocessors of today outperform the supercomputers of less than 10 years ago. • The rapid improvement has come both from advances in the technology used to build computers and from innovations in computer design.

  3. During the first 25 years of electronic computers, both forces made major contributions, delivering performance improvement of about 25% per year. • Microprocessors emerged during the late 1970s, and their capabilities, along with improvements in integrated circuit (IC) technology, contributed to performance growth of about 35% per year.

  4. Changes in the computer marketplace made it easier than ever before to be commercially successful with a new architecture. • First, the virtual elimination of assembly language programming reduced the need for object-code compatibility. • Second, the creation of standardized, vendor-independent operating systems such as UNIX and its clone, Linux, lowered the cost and risk of bringing out a new architecture.

  5. In the early 1980s, Reduced Instruction Set Computer (RISC) based machines focused the attention of designers on two critical performance techniques: the exploitation of instruction-level parallelism (ILP) and the use of caches. • Figure 1.1 shows the growth in processor performance since the mid-1980s. The graph plots performance relative to the VAX-11/780 as measured by the SPECint benchmarks (SPEC: Standard Performance Evaluation Corporation). • From the figure it is clear that architectural and organizational

  6. enhancements led to 16 years of sustained growth in performance at an annual rate of over 50%. • Since 2002, processor performance improvement has dropped to about 22% per year due to the following hurdles: • 1. Maximum power dissipation of air-cooled chips • 2. Little ILP left to exploit efficiently • 3. Limitations imposed by memory latency

  7. Classes of Computers • 1960: Large mainframes costing millions of dollars (applications: business data processing, large-scale scientific computing) • 1970: Minicomputers (scientific laboratories, time-sharing concepts) • 1980: Desktop computers (microprocessor-based) in the form of personal computers and workstations (larger memory, more computing power; replaced time-sharing systems) • 1990: Emergence of the Internet and WWW, PDAs (personal digital assistants), and high-performance digital consumer electronics

  8. 2000: Cell phones. • These changes in computer use have led to three different computing classes, each characterized by different applications, requirements, and computing technologies. (Figure: growth in processor performance since the 1980s.)

  9. Defining Computer Architecture • The computer designer has to determine what attributes are important for a new computer, then design a computer to maximize performance and energy efficiency while staying within cost, power, and availability constraints. • The task has several important aspects, such as instruction set design, functional organization, logic design, and implementation. • The implementation may encompass integrated circuit design, packaging, power, and cooling.

  10. Instruction Set Architecture (ISA) • ISA refers to the actual, programmer-visible instruction set. The ISA serves as the boundary between the software and the hardware. • The seven dimensions of the ISA are:

  11. i) Class of ISA: • Nearly all ISAs today are classified as general-purpose register architectures, in which the operands are either registers or memory locations. • The two popular versions of this class are: • Register-memory ISAs (e.g., the 80x86 ISA), which can access memory as part of many instructions. • Load-store ISAs (e.g., the MIPS ISA), which can access memory only with load or store instructions. • All recent ISAs are load-store.

  12. ii) Memory addressing: Byte addressing is the scheme most widely used in all desktop and server computers; both the 80x86 and MIPS use byte addressing. iii) Addressing modes. iv) Types and sizes of operands: 8-bit (ASCII character), 16-bit (Unicode character), 32-bit (integer/word), 64-bit (long integer/double word), 32-bit (IEEE 754 single-precision floating point), 64-bit (double-precision floating point).

  13. v) Operations: The general categories of operations are data transfer, arithmetic operations, logic operations, and control operations. The MIPS ISA is simple and easy to implement. • vi) Control flow instructions: All ISAs support conditional and unconditional branches. • vii) Encoding an ISA: Fixed-length ISAs (e.g., MIPS, every instruction 32 bits long) simplify decoding; variable-length ISAs (e.g., 80x86, 1 to 18 bytes per instruction) take less space.

  14. Trends in Technology • The designer must be aware of the following rapid changes in implementation technology. • Integrated Circuit (IC) Logic technology • Memory technology (semiconductor DRAM technology) • Storage or magnetic disk technology • Network technology

  15. IC logic technology: • Transistor density increases by about 35% per year. Die size increases by about 10% to 20% per year. The combined effect is a growth rate in transistor count on a chip of about 40% to 55% per year. • Semiconductor DRAM technology: capacity increases by about 40% per year. • Storage technology: • Before 1990: storage density increased by about 30% per year. • After 1990: storage density increased by about 60% per year. • Disks are still 50 to 100 times cheaper per bit than DRAM.

  16. Network technology: • Network performance depends both on the performance of the switches and on the performance of the transmission system. • Performance trends: Bandwidth or throughput is the total amount of work done in a given time. • Latency or response time is the time between the start and the completion of an event (e.g., milliseconds for a disk access).

  17. Trends in Power in Integrated Circuits • Energy and Power within a Microprocessor • For CMOS chips, the traditional primary energy consumption has been in switching transistors, also called dynamic energy. • The energy required per transistor is proportional to the product of the capacitive load driven by the transistor and the square of the voltage: • Energy_dynamic ∝ Capacitive load × Voltage²

  18. This equation gives the energy of a pulse for the logic transition 0→1→0 or 1→0→1. The energy of a single transition (0→1 or 1→0) is then: • Energy_dynamic ∝ 1/2 × Capacitive load × Voltage² • The power required per transistor is the energy of a transition multiplied by the frequency of transitions: • Power_dynamic ∝ 1/2 × Capacitive load × Voltage² × Frequency switched
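These relations can be sketched as a couple of Python helpers. This is only an illustration (the function and parameter names are my own, not from the slides), and since the relations are proportionalities, the results should be read in relative units.

```python
# Illustrative sketch of the dynamic energy/power relations above.
# The quantities are proportionalities, so results are in relative units.

def dynamic_energy_per_transition(capacitive_load, voltage):
    """Energy of a single 0->1 or 1->0 transition: 1/2 * C * V^2."""
    return 0.5 * capacitive_load * voltage ** 2

def dynamic_power(capacitive_load, voltage, frequency_switched):
    """Dynamic power: energy per transition times the switching frequency."""
    return dynamic_energy_per_transition(capacitive_load, voltage) * frequency_switched
```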

  19. Dynamic power and energy are greatly reduced by lowering the voltage, so voltages have dropped from 5 V to just under 1 V in 20 years. • The capacitive load is a function of the number of transistors connected to an output and of the technology, which determines the capacitance of the wires and the transistors.

  20. Example • Some microprocessors today are designed to have adjustable voltage, so a 15% reduction in voltage may result in a 15% reduction in frequency. What would be the impact on dynamic energy and on dynamic power? • Answer • Since the capacitance is unchanged, the ratio of energies is simply the ratio of the squared voltages: • Energy_new / Energy_old = (Voltage × 0.85)² / Voltage² = 0.85² = 0.72

  21. thereby reducing energy to about 72% of the original. For power, we also include the ratio of the frequencies: • Power_new / Power_old = 0.72 × (Frequency switched × 0.85) / Frequency switched = 0.61 • shrinking power to about 61% of the original.
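The example's ratios can be checked numerically with a few lines of Python (0.85 stands for the 15% reduction in both voltage and frequency):

```python
voltage_ratio = 0.85       # 15% lower voltage
frequency_ratio = 0.85     # 15% lower frequency

energy_ratio = voltage_ratio ** 2              # energy scales with V^2
power_ratio = energy_ratio * frequency_ratio   # power adds the frequency ratio

print(f"energy: {energy_ratio:.2f} of original")   # ~0.72
print(f"power:  {power_ratio:.2f} of original")    # ~0.61
```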

  22. As we move from one process to the next, the increase in the number of transistors switching and the frequency with which they switch dominate the decrease in load capacitance and voltage, leading to an overall growth in power consumption and energy. • The first microprocessors consumed less than a watt and the first 32-bit microprocessors (like the Intel 80386) used about 2 watts, while a 3.3 GHz Intel Core i7 consumes 130 watts. Given that this heat must be dissipated from a chip that is about 1.5 cm on a side, we have reached the limit of what can be cooled by air.

  23. Distributing power, removing heat, and preventing hot spots have become increasingly difficult challenges. Techniques for improving energy efficiency include: • 1. Do nothing well: Most microprocessors turn off the clock of inactive modules to save energy and dynamic power. For example, if no floating-point instructions are executing, the floating-point unit is disabled.

  24. 2. Dynamic voltage-frequency scaling (DVFS): This follows from the power formula above. Personal mobile devices, laptops, and servers have periods of low activity where there is no need to operate at the highest clock frequency and voltage. Modern processors offer a few clock frequencies and voltages at which to operate that use lower power and energy.
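A minimal sketch of the DVFS idea, reusing the dynamic power relation above; the operating points listed are invented for illustration and are not figures from the slides.

```python
# Hypothetical DVFS operating points: (voltage in volts, frequency in GHz).
operating_points = [(1.0, 3.0), (0.9, 2.4), (0.8, 1.8)]

nominal_v, nominal_f = operating_points[0]
for v, f in operating_points:
    # Dynamic power scales with V^2 * f; the capacitive load cancels in the ratio.
    relative_power = (v ** 2 * f) / (nominal_v ** 2 * nominal_f)
    print(f"{v:.1f} V, {f:.1f} GHz -> {relative_power:.0%} of nominal dynamic power")
```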

  25. 3. Design for the typical case: PMDs and laptops are often idle, so memory and storage offer low-power modes to save energy. For example, DRAMs have a series of increasingly lower-power modes to extend battery life. • 4. Overclocking and turning off cores.

  26. Trends in Cost • In the past 30 years, the use of technology improvements to lower cost, as well as to increase performance, has been a major theme in the computer industry. • Major factors that influence the cost of a computer: • The impact of time, volume, and commoditization • The cost of an integrated circuit

  27. The cost of a manufactured computer component decreases over time even without major improvements in the basic implementation technology. • The underlying principle that drives costs down is the learning curve—manufacturing costs decrease over time. • Understanding how the learning curve improves yield is critical to projecting costs over a product’s life. • One example is that the price per megabyte of DRAM has dropped over the long term.

  28. Microprocessor prices also drop over time, but, because they are less standardized than DRAMs, the relationship between price and cost is more complex. • Volume is a second key factor in determining cost. Increasing volumes affect cost in several ways; some designers have estimated that cost decreases about 10% for each doubling of volume, as the sketch below illustrates.
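As a back-of-the-envelope sketch of that rule of thumb (the base cost and volumes below are invented, and the 10% figure is only the estimate quoted above):

```python
import math

def cost_at_volume(base_cost, base_volume, volume, decrease_per_doubling=0.10):
    """Estimate unit cost if cost falls ~10% for each doubling of volume."""
    doublings = math.log2(volume / base_volume)
    return base_cost * (1 - decrease_per_doubling) ** doublings

# Hypothetical part costing $100 at 10,000 units: at 80,000 units (3 doublings)
# the estimate is 100 * 0.9**3, roughly $72.90.
print(round(cost_at_volume(100.0, 10_000, 80_000), 2))
```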

  29. Commodities are products that are sold by multiple vendors in large volumes and are essentially identical. • Virtually all the products sold on the shelves of grocery stores are commodities, as are standard DRAMs, Flash memory, disks, monitors, and keyboards. • In the past 25 years, much of the personal computer industry has become a commodity business focused on building desktop and laptop computers running Microsoft Windows.

  30. Cost of an Integrated Circuit • In an increasingly competitive computer marketplace where standard parts—disks, Flash memory, DRAMs, and so on—are becoming a significant portion of any system’s cost, integrated circuit costs are becoming a greater portion of the cost that varies between computers, especially in the high-volume, cost-sensitive portion of the market.

  31. Although the costs of integrated circuits have dropped exponentially, the basic process of silicon manufacture is unchanged: a wafer is still tested and chopped into dies that are packaged. • Cost of integrated circuit = (Cost of die + Cost of testing die + Cost of packaging and final test) / Final test yield

  32. We focus on the cost of dies, summarizing the key issues in testing and packaging at the end. • Learning how to predict the number of good chips per wafer requires first learning how many dies fit on a wafer and then learning how to predict the percentage of those that will work. From there it is simple to predict cost: • Cost of die = Cost of wafer / (Dies per wafer × Die yield)

  33. The most interesting feature of this first term of the chip cost equation is its sensitivity to die size, shown below. • The number of dies per wafer is approximately the area of the wafer divided by the area of the die. It can be more accurately estimated by: • Dies per wafer = π × (Wafer diameter / 2)² / Die area − π × Wafer diameter / √(2 × Die area)

  34. The first term is the ratio of wafer area (πr²) to die area. The second compensates for the “square peg in a round hole” problem: rectangular dies near the periphery of round wafers. • Dividing the circumference (πd) by the diagonal of a square die gives approximately the number of dies along the edge.

  35. This 300 mm wafer contains 280 full Sandy Bridge dies, each 20.7 by 10.5 mm in a 32 nm process.
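The formulas above can be sketched in Python and checked against the Sandy Bridge wafer just mentioned; the wafer cost, die yield, and test/packaging costs are placeholders for illustration, not data from the slides.

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Wafer area over die area, minus the dies lost around the round edge."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def cost_of_die(wafer_cost, wafer_diameter_mm, die_area_mm2, die_yield):
    """Cost of the wafer spread over the good dies it yields."""
    return wafer_cost / (dies_per_wafer(wafer_diameter_mm, die_area_mm2) * die_yield)

def cost_of_ic(die_cost, test_cost, packaging_cost, final_test_yield):
    """(Cost of die + testing die + packaging and final test) / final test yield."""
    return (die_cost + test_cost + packaging_cost) / final_test_yield

# 300 mm wafer, 20.7 x 10.5 mm Sandy Bridge die: ~280 full dies, as stated above.
print(dies_per_wafer(300, 20.7 * 10.5))  # 280
```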

  36. Dependability • Dependability is a measure of a system's availability, reliability, and maintainability. It may also encompass mechanisms designed to increase and maintain the dependability of a system. • Reliability, availability, and serviceability (RAS) is a set of related attributes that must be considered when designing, manufacturing, purchasing, or using a computer product or component.

  37. Today RAS is relevant to software as well and can be applied to hardware, networks, application programs, operating systems (OSs), personal computers (PCs), servers, and supercomputers. • The term reliability refers to the ability of a computer-related hardware or software component to consistently perform according to its specifications. • In theory, a reliable product is totally free of technical errors; in practice, vendors commonly express product reliability as a percentage.

  38. Availability is the ratio of the time a system or component is functional to the total time it is required or expected to function. • This can be expressed as a direct proportion (for example, 9/10 or 0.9) or as a percentage (for example, 90%). • Sometimes availability is expressed in qualitative terms, indicating the extent to which a system can continue to work when a significant component or set of components goes down.
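A trivial sketch of that ratio in Python (the uptime and downtime figures are invented for illustration):

```python
def availability(uptime_hours, downtime_hours):
    """Fraction of the required time during which the system was functional."""
    return uptime_hours / (uptime_hours + downtime_hours)

# e.g., 8,754 hours up and 6 hours down over a year (8,760 hours) -> ~99.93%.
print(f"{availability(8754, 6):.2%}")
```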

  39. Serviceability is an expression of the ease with which a component, device or system can be maintained and repaired. • Early detection of potential problems is critical in this respect. Some systems have the ability to correct problems automatically before serious trouble occurs; • examples include built-in features of OSs such as Microsoft Windows XP and auto-protect-enabled anti-virus software and spyware detection and removal programs.

  40. Summarizing Performance • The user of a desktop computer may say a computer is faster when a program runs in less time. • The computer user is interested in reducing response time—the time between the start and the completion of an event—also referred to as execution time. • The operator of a warehouse-scale computer may be interested in increasing throughput—the total amount of work done in a given time.

  41. In comparing design alternatives, we often want to relate the performance of two different computers, say, X and Y. The phrase “X is faster than Y” is used here to mean that the response time or execution time is lower on X than on Y for the given task. In particular, “X is n times faster than Y” will mean: • n = Execution time_Y / Execution time_X
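A small sketch of that comparison (the execution times are made up for the example):

```python
def times_faster(exec_time_x_seconds, exec_time_y_seconds):
    """n such that 'X is n times faster than Y' (lower execution time is faster)."""
    return exec_time_y_seconds / exec_time_x_seconds

# If a task takes 12 s on X and 18 s on Y, X is 1.5 times faster than Y.
print(times_faster(12.0, 18.0))  # 1.5
```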
