

  1. Lecture 2

  2. Computer-System Architecture

  3. Computer-System Architecture • Single-Processor Systems • Multiprocessor Systems • Clustered Systems

  4. Computer-System Architecture 1- Single general-purpose processor • On a single-processor system, there is one main CPU capable of executing a general-purpose instruction set, including instructions from user processes. • Single-processor systems are the most common.

  5. Computer-System Architecture 2- Multiprocessor Systems: are growing in use and importance • Also known as parallel systems or tightly coupled systems. • These systems have two or more processors in close communication, sharing the computer bus, memory, and peripheral devices. • Advantages include • Increased throughput • Economy of scale • Increased reliability – graceful degradation or fault tolerance

  6. Computer-System Architecture 2- Multiprocessor Systems: Increased throughput • By increasing the number of processors, we expect to get more work done in less time. • However, the speed-up ratio with N processors is not N; it is less than N. • When multiple processors cooperate on a task, a certain amount of overhead is incurred in keeping all the parts working correctly. This overhead, plus contention for shared resources, lowers the expected gain from additional processors. • Similarly, N programmers working closely together do not produce N times the amount of work a single programmer would produce.
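To make the "less than N" point concrete, here is a minimal sketch using a simple Amdahl-style model; the 10% serial fraction is an assumed figure for illustration, not one taken from the lecture.

```c
/* Sketch: why N processors yield less than N-times speedup.
 * Simple Amdahl-style model: a fixed fraction of the work (assumed 10%)
 * stays serial because of overhead and shared resources. Illustrative only. */
#include <stdio.h>

int main(void) {
    double serial = 0.10;   /* assumed fraction that cannot be parallelized */
    for (int n = 1; n <= 16; n *= 2) {
        double speedup = 1.0 / (serial + (1.0 - serial) / n);
        printf("N = %2d processors -> speedup = %.2f (not %d)\n",
               n, speedup, n);
    }
    return 0;
}
```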

  7. Computer-System Architecture 2- Multiprocessor Systems: Economy of scale • Multiprocessor systems can cost less than equivalent multiple single-processor systems, because they can share peripherals, mass storage, and power supplies. If several programs operate on the same set of data, it is cheaper to store those data on one disk and have all the processors share them than to have many computers with local disks and many copies of the data.

  8. Computer-System Architecture 2- Multiprocessor Systems: Increased reliability • If functions can be distributed properly among several processors, then the failure of one processor will not halt the system, only slow it down. • If we have ten processors and one fails, then each of the remaining nine processors can pick up a share of the work of the failed processor. Thus, the entire system runs only 10 percent slower, rather than failing altogether.

  9. Computer-System Architecture • Two types: • Asymmetric Multiprocessing • Symmetric Multiprocessing 1- Asymmetric Multiprocessing • Each processor is assigned a specific task. • A master processor controls the system. • The other processors either look to the master for instruction (master-slave relationship) or have predefined tasks.

  10. Computer-System Architecture 2- Symmetric Multiprocessing • The most common systems use symmetric multiprocessing (SMP). • Each processor performs all tasks within the operating system. • No master–slave relationship exists between processors • All processors are peers.
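As a rough user-level illustration of "all processors are peers": the POSIX-threads sketch below (not from the lecture; the thread count and work are arbitrary choices) creates several identical threads, and under SMP the kernel may schedule each of them on any available processor.

```c
/* SMP illustration: four identical worker threads; under SMP the kernel
 * is free to run each of them on any available processor.
 * Build (Linux/macOS): cc smp.c -o smp -lpthread */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static void *worker(void *arg) {
    long id = (long)arg;
    double sum = 0.0;
    for (long i = 1; i <= 10000000L; i++)   /* some CPU-bound work */
        sum += 1.0 / (double)i;
    printf("thread %ld finished (sum = %.4f)\n", id, sum);
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```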

  11. Computer-System Architecture • The difference between symmetric and asymmetric multiprocessing may result from either hardware or software. • Special hardware can differentiate the multiple processors or the software can be written to allow only one master and multiple slaves. • For instance, Sun’s operating system SunOS Version 4 provided asymmetric multiprocessing, whereas Version 5 (Solaris) is symmetric on the same hardware.

  12. Symmetric Multiprocessing Architecture

  13. Computer-System Architecture • A recent trend in CPU design is to include multiple computing cores (a core is the basic computation (processing) unit of the CPU - it can run a single program context) on a single chip.

  14. Computer-System Architecture In the above figure, we show a dual-core design with two cores on the same chip. In this design, each core has its own register set as well as its own local cache; other designs might use a shared cache or a combination of local and shared caches.

  15. Computer-System Architecture

  16. Computer-System Architecture • A CPU design may place multiple cores on one chip or use multiple chips, each with a single core. • Multiple cores per chip are more efficient, because on-chip communication is faster than communication between chips. • In addition, one chip with multiple cores uses significantly less power than multiple single-core chips. • Multicore systems are especially well suited for server systems such as database and Web servers.
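On a Unix-like system, a program can ask how many cores are currently online. A small sketch; _SC_NPROCESSORS_ONLN is a common extension on Linux and similar systems, not something guaranteed by the lecture.

```c
/* Report the number of processor cores currently online. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long cores = sysconf(_SC_NPROCESSORS_ONLN);
    if (cores < 0)
        perror("sysconf");
    else
        printf("online cores: %ld\n", cores);
    return 0;
}
```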

  17. Clustered Systems 3- Clustered Systems • Definition: Clustered computers share storage and are closely linked via LAN networking. • Like multiprocessor systems, but composed of multiple systems working together. • A clustered system uses multiple CPUs to complete a task. • It differs from a parallel system in that a clustered system consists of two or more individual systems tied together.

  18. Clustered Systems • Clustering provides high availability: a service will continue even if one or more systems in the cluster fail. Each node can monitor one or more of the other nodes over the LAN. • If a monitored machine fails, the monitoring machine can take ownership of its storage and restart the applications that were running on the failed machine. The users and clients of the applications see only a brief interruption of service.

  19. Clustered Systems A clustered system can take the following forms: • Asymmetric clustering: In this form, one machine is in hot-standby mode while the other machines run the applications. The hot-standby machine does nothing but monitor the active server; it becomes the active server if that server fails. • Symmetric clustering: In this mode, two or more machines run the applications and monitor each other at the same time. This mode is more efficient because it uses all available machines, but it can be used only if multiple applications are available to be executed. • High-Performance Computing (HPC): Applications must be written to use parallelization.
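For the HPC point, one common way to write cluster applications for parallel execution is message passing with MPI. MPI is not mentioned in the lecture itself; the minimal sketch below is only an example and assumes an MPI implementation such as Open MPI is installed.

```c
/* Minimal message-passing sketch: each process in the cluster job prints
 * its rank. Build and run: mpicc hello.c -o hello && mpirun -np 4 ./hello */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);                 /* start the parallel environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    printf("process %d of %d reporting\n", rank, size);
    MPI_Finalize();
    return 0;
}
```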

  20. Operating System Structure

  21. Operating System Structure • Multiprogramming • Multiprocessing • Multitasking

  22. Operating System Structure Multiprogramming: • A single program cannot, in general, keep either the CPU or the I/O devices busy at all times. • Multiprogramming increases CPU utilization by organizing jobs (code and data) so that the CPU always has one to execute. • Multiprogramming is a form of concurrent execution in which several programs are in progress at the same time on a uniprocessor. • Since there is only one processor, there can be no true simultaneous execution of different programs. Instead, the operating system executes part of one program, then part of another, and so on. • To the user, it appears that all programs are executing at the same time.

  23. Operating System Structure Multiprogramming: • The idea is as follows: the operating system keeps several jobs in memory simultaneously. Since, in general, main memory is too small to accommodate all jobs, the jobs are kept initially on the disk in the job pool. • This pool consists of all processes residing on disk awaiting allocation of main memory. • The set of jobs in memory can be a subset of the jobs kept in the job pool. • The operating system picks and begins to execute one of the jobs in memory.

  24. Memory Layout for a Multiprogramming System • One job is selected and run via job scheduling. • When it has to wait (for I/O, for example), the OS switches to another job.

  25. Operating System Structure

  26. Operating System Structure • Note: • Multiprogramming means that several programs in different stages of execution are coordinated to run on a single I-stream engine (CPU). • Multiprocessing means the coordination of the simultaneous execution of several programs running on multiple I-stream engines (CPUs).

  27. Operating System Structure • Time sharing (multitasking) is a logical extension of multiprogramming, in which the CPU switches between jobs so frequently that users can interact with each job while it is running, creating interactive computing.

  28. Operating System Structure • Time sharing requires an interactive (or hands-on) computer system, which provides direct communication between the user and the system. • The user gives instructions to the operating system or to a program directly, using an input device such as a keyboard or a mouse, and waits for immediate results on an output device. • Accordingly, the response time should be short, typically less than one second.

  29. Operating System Structure • A time-shared operating system allows many users to share the computer simultaneously. Since each action or command in a time-shared system tends to be short, only a little CPU time is needed for each user. • As the system switches rapidly from one user to the next, each user is given the impression that the entire computer system is dedicated to his use, even though it is being shared among many users.

  30. Operating System Structure • Response time should be < 1 second. • Each user has at least one program executing in memory; such a program is called a process. • When a process executes, it typically executes for only a short time before it either finishes or needs to perform I/O. • Time sharing and multiprogramming require several jobs to be kept simultaneously in memory. If several jobs are ready to be brought into memory, and if there is not enough room for all of them, then the system must choose among them. Making this decision is job scheduling.

  31. Operating System Structure • In a time-sharing system, the operating system must ensure reasonable response time. This is sometimes accomplished through swapping, where processes are swapped in and out of main memory to the disk. • A more common method for achieving this goal is virtual memory, a technique that allows the execution of a process that is not completely in memory.

  32. Operating System Structure • The main advantage of the virtual-memory scheme is that it enables users to run programs that are larger than actual physical memory.

  33. Computer-System Operation • Multiprogramming: the running task keeps running until it performs an operation that requires waiting for an external event (e.g., reading from a tape) or until the computer's scheduler forcibly swaps the running task out of the CPU. Multiprogramming systems are designed to maximize CPU usage. • Multitasking: a method by which multiple tasks, also known as processes, share common processing resources such as a CPU. In the case of a computer with a single CPU, only one task is said to be running at any point in time, meaning that the CPU is actively executing instructions for that task.

  34. Computer-System Operation • Multitasking solves the problem by scheduling which task may be the one running at any given time, and when another waiting task gets a turn. The act of reassigning the CPU from one task to another is called a context switch. • Multiprocessing: a generic term for the use of two or more central processing units (CPUs) within a single computer system. There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined (multiple cores on one die, multiple chips in one package, multiple packages in one system unit, etc.).
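As a user-level illustration of multitasking on a single CPU, the sketch below (not from the lecture; assumes a POSIX system) creates a second process with fork(). The scheduler interleaves the two processes, and each switch between them is a context switch.

```c
/* Multitasking sketch: parent and child share one CPU; the scheduler
 * interleaves them, and every switch between the two is a context switch.
 * The interleaving of the output differs from run to run. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                    /* create a second process */
    const char *who = (pid == 0) ? "child " : "parent";
    for (int i = 0; i < 5; i++) {
        printf("%s running (pid %d, step %d)\n", who, (int)getpid(), i);
        usleep(1000);                      /* give up the CPU, inviting a context switch */
    }
    if (pid > 0)
        wait(NULL);                        /* parent waits for the child to finish */
    return 0;
}
```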

  35. Operating System Operations

  36. Operating System Operation • Each device controller is in charge of a particular device type (disk drives, video displays, etc.). • I/O devices and the CPU can execute concurrently. • Each device controller has a local buffer. • The CPU moves data from/to main memory to/from the local buffers. • I/O proceeds from the device to the local buffer of the controller. • The device controller informs the CPU that it has finished its operation by causing an interrupt.

  37. Operating System Operation • To start an I/O operation (e.g., a read from a keyboard), the device driver loads the appropriate registers within the device controller. • The device controller of the keyboard, in turn, examines the contents of these registers to determine what action to take (such as "read a character from the keyboard"). • The controller starts transferring data from the keyboard to its local buffer. Once the transfer of data is complete, the device controller informs the device driver via an interrupt that it has finished its operation.
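A hypothetical sketch of the driver side of this sequence; the register layout and the "controller" below are invented for illustration (real hardware differs), and a static struct stands in for the memory-mapped registers so the sketch compiles and runs.

```c
/* Hypothetical driver sketch (illustration only): in a real kernel the
 * struct would be mapped at the controller's hardware address. */
#include <stdint.h>
#include <stdio.h>

struct kbd_regs {                  /* invented controller register layout */
    volatile uint32_t command;     /* driver writes a command here */
    volatile uint32_t status;      /* controller sets a "done" bit here */
    volatile uint32_t data;        /* controller's local buffer (one character) */
};

enum { CMD_READ = 1, STATUS_DONE = 1 };

static struct kbd_regs fake_controller;   /* stand-in for memory-mapped registers */

/* Device driver: start the I/O by loading the controller's registers. */
static void kbd_start_read(struct kbd_regs *regs) {
    regs->command = CMD_READ;
}

/* Interrupt handler: runs after the controller signals completion. */
static void kbd_interrupt(struct kbd_regs *regs) {
    if (regs->status & STATUS_DONE)
        printf("driver received character: %c\n", (char)regs->data);
}

int main(void) {
    kbd_start_read(&fake_controller);      /* driver loads the registers */
    fake_controller.data = 'A';            /* pretend the controller filled its buffer */
    fake_controller.status = STATUS_DONE;  /* ...and raised an interrupt */
    kbd_interrupt(&fake_controller);       /* handler reads from the local buffer */
    return 0;
}
```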

  38. Operating System Operations • An interrupt is a hardware- or software-generated change of flow within the system. • Hardware interrupt: e.g., a service request from an I/O device. • Software interrupt (trap): e.g., invalid memory access, division by zero, or a system call.
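A user-level analogue of this flow, not kernel code and not from the lecture: pressing Ctrl-C makes the kernel deliver SIGINT to the process, control jumps to a registered handler, and normal execution then resumes (assumes a POSIX system).

```c
/* User-level analogue of interrupt handling via a signal handler. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t interrupted = 0;

static void on_interrupt(int sig) {   /* plays the role of the service routine */
    (void)sig;
    interrupted = 1;
}

int main(void) {
    signal(SIGINT, on_interrupt);     /* register the handler */
    printf("running; press Ctrl-C to interrupt\n");
    while (!interrupted)
        pause();                      /* sit quietly until something happens */
    printf("interrupt handled, resuming and exiting\n");
    return 0;
}
```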

  39. Operating-System Operations • Each computer architecture has its own interrupt mechanism, but several functions are common: • When an interrupt occurs, control is transferred to the interrupt service routine responsible for dealing with the interrupt. The interrupt service routine is generally reached through the interrupt vector, which records where to find the appropriate interrupt service routine for the current interrupt. • The interrupt architecture must save the address of the instruction that has been interrupted (the program counter).

  40. Operating-System Operations • Incoming interrupts must be disabled while an interrupt is being processed, to prevent interrupts from being lost or overwritten by newly arriving interrupts. • An operating system is interrupt driven. This means that if there are no interrupts (no processes to execute, no I/O devices to service, and no users to whom to respond), the operating system sits quietly, waiting for something to happen; i.e., the system is idle. • The operating system must preserve the state of the CPU by storing the contents of the registers and the program counter.

  41. Operating-System Operations • The operating system must determine which type of interrupt has occurred. This can be done either by polling or by using a vectored interrupt system. • Polling is the systematic checking of each device to see if it was the device responsible for generating the interrupt. If the operating system has a vectored interrupt system, then the identity of the device and the type of interrupt will be easily identifiable without checking each device.

  42. Operating-System Operations • The operating system must provide a segment of code that specifies what action is to be taken in the event of an interrupt. There must be a code segment that is specific to each type of interrupt.

  43. Operating-System Operations Instruction Cycle with Interrupts • CPU checks for interrupts after each instruction. • If no interrupts, then fetch next instruction of current program. • If an interrupt is pending, then suspend execution of the current program. The processor sends an acknowledgement signal to the device that issued the interrupt so that the device can remove its interrupt signal. • Interrupt architecture saves the address of the interrupted instruction (and values of other registers).

  44. Operating-System Operations Instruction Cycle with Interrupts • Interrupt transfers control to the interrupt service routine (Interrupt Handler), generally through the interrupt vector, which contains the addresses of all the interrupt service routines. • Separate segments of code determine what action should be taken for each type of interrupt.
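The interrupt vector can be pictured as a table indexed by interrupt number whose entries are the addresses of the service routines. The sketch below is a plain user-space illustration with invented interrupt numbers and handlers.

```c
/* Interrupt-vector sketch: a table indexed by interrupt number holds the
 * addresses of the service routines, so dispatch is one table lookup. */
#include <stdio.h>

#define NUM_VECTORS 3

typedef void (*isr_t)(void);

static void timer_isr(void)    { printf("timer interrupt serviced\n"); }
static void keyboard_isr(void) { printf("keyboard interrupt serviced\n"); }
static void disk_isr(void)     { printf("disk interrupt serviced\n"); }

static isr_t interrupt_vector[NUM_VECTORS] = { timer_isr, keyboard_isr, disk_isr };

static void dispatch(int irq) {
    /* (state saving would happen here) transfer control through the vector */
    if (irq >= 0 && irq < NUM_VECTORS)
        interrupt_vector[irq]();
    /* (state restoring and return to the interrupted instruction) */
}

int main(void) {
    dispatch(1);   /* pretend the keyboard raised interrupt number 1 */
    dispatch(0);   /* pretend the timer raised interrupt number 0 */
    return 0;
}
```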

  45. Operating-System Operations Interrupt Handler • A program that determines nature of the interrupt and performs whatever actions are needed. • Control is transferred to this program. • Generally, it is a part of the operating system.

  46. Operating-System Operations: Interrupt Handling Procedure • Interrupt handling: 1. Save the interrupt information. 2. The OS determines the interrupt type (by polling). 3. Call the corresponding handler. 4. Return to the interrupted job by restoring the saved information (e.g., the saved return address / program counter).

  47. System Calls

  48. System Calls • System call: a mechanism used by an application for requesting a service from the operating system. • Examples of the services provided by the operating system are allocation and de-allocation of memory, reporting of the current date and time, etc. These services can be used by an application with the help of system calls. Many modern OSes have hundreds of system calls; for example, Linux has 319 different system calls.

  49. System Calls • System calls provide an interface to the services made available by an operating system. These calls are generally available as routines written in C and C++, although certain low-level tasks (for example, tasks where hardware must be accessed directly) may need to be written using assembly-language instructions.
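A small illustration of that C interface (assumes a POSIX system; the specific calls are chosen purely as examples): write() and getpid() are library routines that each perform one system call on the program's behalf.

```c
/* Two routine calls that each perform one system call: write() asks the
 * kernel for an output service, getpid() asks it for the process id. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    const char *msg = "hello from a system call\n";
    write(STDOUT_FILENO, msg, strlen(msg));            /* write(2) */
    printf("my process id is %d\n", (int)getpid());    /* getpid(2) */
    return 0;
}
```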
