
COMPUTER ARCHITECTURE


Presentation Transcript


  1. COMPUTER ARCHITECTURE 1. Introduction by Dr. John Abraham, University of Texas-Pan American

  2. Computer Architecture • design of computers • instruction sets • hardware components • system organization • two parts • Instruction set architecture (ISA) • Hardware-system architecture (HSA)

  3. Instruction set architecture (ISA) • includes specifications that determine how machine-language programmers will interact with the computer • A computer is generally viewed in terms of ISA • which determines the computational characteristics of the computer

  4. Hardware-system architecture (HSA) • Deals with the computer’s major hardware subsystems • CPU • I/O • HSA includes both the logical design and dataflow organization of these components • HSA determines the efficiency of the machine

  5. Computer family architecture • PCs come with varying HSAs • But they all have the same ISA • A computer family is a set of implementations that share the same or similar ISA.

  6. IBM System/360 family architecture • Introduced in the 1960s • Models 20, 30, 40, 44, 50, 65 and 91 • Different amounts of memory, speed, storage, etc. • All ran the same software

  7. Other families • DEC PDP-8 family 1965, PDP-11 1970, VAX-11 family 1978 • CDC 6000 family 1960s, CYBER 170 series in the 70s • IBM System /370 family 1970s • IBM Enterprise System Architecture/370, 1988

  8. Compatibility • Ability of different computers to run the same programs • Upward compatibility • High end computers of the same family can run programs written for low end family members • Downward compatibility • Not always possible, since software written for higher end machines may not run on low end machines

  9. History • First Generation • Second Generation • Third Generation • Fourth Generation

  10. First Generation • One-of-a-kind laboratory machines • ENIAC • Built by Eckert and Mauchly, consultant: John von Neumann • Not a stored-program computer • EDSAC, EDVAC • MARK I through MARK IV • Howard Aiken

  11. First Generation cont. • Used vacuum tubes and electromechanical relays • First commercial product - UNIVAC • Tens of thousands of vacuum tubes consumed much power • Produced a lot of heat

  12. Second Generation • Transistor invented in 1948 • John Bardeen, Walter Brattain and William Shockley of Bell Labs • Consumed much less power than vacuum tubes • Smaller and more reliable than vacuum tubes • Magnetic-core memory • Donut-shaped magnetic elements • Provided reliability and speed • 1 megabyte of core memory cost 1 million dollars

  13. Second Generation cont. • 1950s and early 60s • Batch processing to maximize CPU utilization • Multiprogramming operating systems were introduced • Operating system loads different programs into non-overlapping parts of memory • Burroughs introduced execution-stack architecture • Uses a stack as an integral part of the CPU • Provided hardware support for high-level languages

  14. Third Generation • 1963-1975 • Small-scale integration (solid state) • Medium-scale integration • Core memory • Minicomputers

  15. Fourth Generation • Intel’s first microprocessor (the 4004) in 1971 • VLSI • Microcomputers • Solid-state memory • Inexpensive storage

  16. More speed is needed • Weather forecasting • Molecular modeling • Electronic design • Seismic prospecting

  17. How to achieve more speed • Processor arrays • Useful for array manipulations • CPU intensive repetitive operations • Pipelining • Assembly line fashion • Several instructions are worked on simultaneously - all at different stages
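The benefit of working on several instructions at once in assembly-line fashion can be sketched with a simple cycle count. This is a minimal model, assuming an idealized pipeline with no hazards or stalls; the 5-stage, 100-instruction numbers are illustrative.

```python
def cycles_unpipelined(n_instructions, n_stages):
    # Without pipelining, each instruction occupies every stage in turn
    # before the next instruction can begin.
    return n_instructions * n_stages

def cycles_pipelined(n_instructions, n_stages):
    # With pipelining, the first instruction takes n_stages cycles to
    # fill the pipe; each later instruction completes one cycle apart.
    return n_stages + (n_instructions - 1)

# Example: 100 instructions on a classic 5-stage pipeline.
print(cycles_unpipelined(100, 5))  # 500
print(cycles_pipelined(100, 5))    # 104
```

Even in this idealized model, pipelining approaches (but never exceeds) a speedup equal to the number of stages.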

  18. Achieving higher speed contd. • RISCs (reduced instruction set computers) • as opposed to complex-instruction-set computers (CISCs) • Multiprocessor computers • Many separate processors. • Alternative architectures • neural networks, dataflow, demand-driven, etc.

  19. Classification of Computer Architectures • Von Neumann Machines • Non-von Neumann Machines

  20. Von Neumann Machines • Hardware has CPU, main memory and IO system • Stored program computer • sequential instruction operation • Single path between CPU and main memory (bottleneck)

  21. Modifications of von Neumann machines • Harvard architectures • Provide independent pathways for the data address, data, instruction address, and instructions • Allow the CPU to access instructions and data simultaneously

  22. CPU • Control unit • ALU • registers • program counter

  23. Instructions • Instructions are stored as values in memory • These values tell the computer what operation to perform • Every instruction has a set of fields • These fields provide specific details to the control unit • Instruction format - the way the fields are laid out

  24. Instructions contd. • Instruction size - how many bytes are needed • Operands - data for the operation • Opcode - numeric code representing the instruction • Instruction set - each CPU has a specific set of instructions it is able to execute • A program is a sequence of instructions

  25. Instructions contd. • Each instruction in a program has a logical address. • Each instruction has a physical address depending on where in memory it is stored. • The sequence of instructions to execute is called an instruction stream • The program counter (PC) is used to keep track of the current instruction in memory

  26. von Neumann machine cycle • instruction fetch • instruction execution • After each fetch the PC points to the next physical address
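The fetch-execute cycle above can be sketched as a tiny simulator. The two-field instruction format and the LOAD/ADD/HALT opcodes are invented for illustration; note how the program and its data share one memory (the stored-program idea) and how the PC advances after each fetch.

```python
# Hypothetical opcodes for a minimal von Neumann machine.
LOAD, ADD, HALT = 0, 1, 2

memory = [
    (LOAD, 4),   # address 0: acc = mem[4]
    (ADD, 5),    # address 1: acc = acc + mem[5]
    (HALT, 0),   # address 2: stop
    0,           # address 3: unused
    10,          # address 4: data
    32,          # address 5: data
]

pc = 0           # program counter
acc = 0          # accumulator register
while True:
    opcode, operand = memory[pc]   # fetch: PC selects the instruction
    pc += 1                        # PC now points at the next instruction
    if opcode == LOAD:             # execute: decode opcode, act on operand
        acc = memory[operand]
    elif opcode == ADD:
        acc += memory[operand]
    elif opcode == HALT:
        break

print(acc)  # 42
```

Every trip through the loop is one machine cycle: one instruction fetched over the single CPU-memory path, then executed.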

  27. Flynn classification • SISD - single instruction stream, single data stream (von Neumann computer) • The rest are non-von Neumann classifications • SIMD - single instruction stream, multiple data streams. One control unit (CU) broadcasts each instruction to many processing elements • Processor arrays fall into this category

  28. Flynn classification contd. • MISD - multiple instruction streams, single data stream. Rarely useful in practice. • MIMD - multiple instruction streams, multiple data streams. • Multiprocessors. More than one independent processor.

  29. Parallel processors • Both SIMD and MIMD machines are called parallel processors • They operate in parallel on more than one datum at a time.

  30. Classification of parallel processors • Based on memory organization • Global-memory (GM): one global memory is shared by all processors • Current high-performance computers have this type of memory • Local-memory (LM): each processor has its own memory • They share data by exchanging it between processors rather than through a common memory area

  31. SIMD machine characteristics • They distribute processing over a large amount of hardware • They operate concurrently on many different data elements • They perform the same computation on all data elements • One CU(control unit) and many PEs (processing elements)
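The SIMD characteristics above can be sketched in a few lines: one "instruction" is broadcast and applied to every data element. This is only an analogy, serialized in plain Python, where `map` stands in for the many PEs that would run concurrently in real hardware.

```python
def simd_apply(instruction, data):
    # One control unit broadcasts the same operation; each "processing
    # element" applies it to its own data element. In hardware all
    # elements would be processed at the same time.
    return list(map(instruction, data))

# Same computation on all data elements: scale a vector by 3.
print(simd_apply(lambda x: 3 * x, [1, 2, 3, 4]))  # [3, 6, 9, 12]
```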

  32. MIMD machine characteristics • Distribute processing over a number of independent processors • share resources including memory • Each processor operates independently and concurrently • Each processor runs its own program • tightly or loosely coupled

  33. Category examples • SISD (RISC) Uniprocessor: MIPS R2000, Sun SPARC, IBM RS/6000 • SISD (CISC) Uniprocessor: IBM PC, DEC PDP-11, VAX-11 • GM-SIMD processor array: Burroughs BSP • LM-SIMD processor array: ILLIAC IV, MPP, CM-1 • GM-MIMD multiprocessor: DEC and IBM tightly coupled machines • LM-MIMD multiple processors: Tandem/16, iPSC/2

  34. Measuring Quality of a computer architecture • Generality • Applicability • Efficiency • Ease of Use • Malleability • Expandability

  35. Generality • Range of applications that can be run on a particular architecture • Generality tends to increase the complexity of application implementations • The more complex a design, the fewer clones will be made of it. (Good or bad?)

  36. Applicability • Utility of architecture for what it was intended for • Scientific and Engineering applications • computation intensive • General commercial applications

  37. Efficiency • Measure of the average amount of hardware that remains busy during normal computer use. • Even with the low cost of hardware today, efficiency is considered very important.

  38. Ease of use • Ease with which system programs can be developed

  39. Malleability • Ease with which computers in the same family can be implemented using this architecture • Example- machines that differ in size and performance

  40. Expandability • How easy it is to increase the capabilities of an architecture. • Increase the number of devices? Make larger devices?

  41. Factors influencing the success of an architecture • Architectural merit • Open/closed architecture • System performance • System Cost

  42. Architectural merit • Measured by: • Applicability • Malleability • Expandability • Compatibility

  43. Open/closed architecture • Example of Open: IBM PC • Example of Closed: Apple

  44. System Performance • Speed of the computer • Benchmark tests • LINPACK, Livermore loops, Whetstone, SPEC • Metrics • MIPS, MFLOPS, GFLOPS • Clock ticks per instruction • I/O speed • Bandwidth in megabits per second
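The MFLOPS metric above can be estimated directly: count floating-point operations performed and divide by elapsed time. A rough sketch follows; interpreter overhead keeps the result far below hardware peak, so it only illustrates how the metric is defined, not the machine's true capability.

```python
import time

def estimate_mflops(n=1_000_000):
    # Perform n multiply-add steps (2 floating-point operations each)
    # and time them with a high-resolution clock.
    x = 1.000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc = acc + x * x          # one multiply + one add
    elapsed = time.perf_counter() - start
    flops = 2 * n                  # total floating-point operations
    return flops / elapsed / 1e6   # millions of FLOPs per second

print(f"~{estimate_mflops():.1f} MFLOPS (interpreter-bound estimate)")
```

Benchmarks such as LINPACK report the same ratio, but over a standardized workload so that different machines can be compared fairly.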
