
CS 620 Advanced Operating Systems



  1. CS 620 Advanced Operating Systems Lecture 1 - Introduction Professor Timothy Arndt BU 331

  2. Operating Systems Review • An operating system is a program interposed between the users and the hardware of a system • An operating system can also be seen as a manager of resources (e.g. processes, files, and I/O devices) • Examples include Windows 98, Windows XP, Mac OS X, Palm OS, and the many flavors of UNIX (HP-UX, Linux, AIX, etc.)

  3. The Place of Operating Systems

  4. Early Operating Systems • Early computers were controlled directly by a programmer or computer operator who entered a job (given on a deck of punched cards) and collected the output (from a line printer). • Groups of jobs were placed together in a single deck giving rise to batch systems. • In order to minimize wasted time, a program called an operating system was developed. The program was always in the computer’s memory and controlled the execution of the jobs.

  5. Early Operating Systems

  6. Batch Operating Systems • In a batch system, various types of cards were put together in a deck • Job control cards were often distinguished by a particular character punched in the first column of the card. These cards contained instructions for the OS written in the machine’s job control language (JCL) • Program source code cards (in a language such as FORTRAN) were compiled into an executable form. • Data cards contained the data needed for a run of the program.

  7. Batch Operating Systems

  8. Multiprogramming System • These systems were inefficient, since the CPU was idle while a running program waited for a slow operation (e.g. I/O) to finish. • In order to get around this problem, several jobs were kept in memory, and when the running program blocked waiting for I/O, the OS switched to one of the other jobs. This type of system is called a multiprogramming system. • Note that this type of system is still not interactive. The user must wait for his job to finish before he sees the output.

  9. Multiprogramming System

  10. Timesharing systems • As time went on, users were connected to the CPU via terminals, and the switching between jobs was fast enough that each user had the illusion of being the sole user of the system (if the system wasn’t overloaded!). This type of system is called a timesharing system. • In this type of system the OS’s scheme for control of the CPU must be much more complicated. In general, each job consists of one or more processes. • Processes can create other processes called child processes.
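
Child-process creation can be made concrete with the POSIX fork() call; a minimal sketch in C (error handling kept short):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();                /* create a child process */
        if (pid == 0) {                    /* child's copy of the code */
            printf("child: pid %d\n", (int)getpid());
            _exit(0);
        } else if (pid > 0) {              /* parent: pid is the child's id */
            wait(NULL);                    /* block until the child exits */
            printf("parent: child %d finished\n", (int)pid);
        } else {
            perror("fork");                /* fork failed */
        }
        return 0;
    }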

  11. Processes

  12. Processes • A running process consists of several segments in the computer’s main memory. • The text segment contains the process’s executable machine instructions. • The global segment contains global variables and static variables (variables declared in a procedure that retain their values between invocations of the procedure). • The data segment (or heap) contains dynamically allocated memory. • The stack contains activation records for subprograms. An activation record holds local variables, parameters, the return location, etc. When a subprogram is called, an activation record is pushed onto the stack; when it exits, its record is popped off the stack.
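
The layout can be illustrated with a short C program; a minimal sketch assuming a typical UNIX process layout (variable names are illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    int global_counter = 0;                     /* global segment */

    void record_call(void) {
        static int calls = 0;                   /* static: retains its value between invocations */
        printf("call number %d\n", ++calls);
    }

    int main(void) {                            /* machine code for main and record_call: text segment */
        int local = 42;                         /* stack: part of main's activation record */
        int *dynamic = malloc(sizeof *dynamic); /* heap: dynamically allocated */
        *dynamic = local + global_counter;
        record_call();                          /* pushes an activation record for record_call... */
        record_call();                          /* ...which is popped each time it returns */
        free(dynamic);
        return 0;
    }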

  13. A Running Process

  14. Processes • Each process has its own virtual address space. The size of the space depends on the word size of the computer; with 32-bit addresses, for example, a process can address 2^32 bytes (4 GB). • The virtual address space is in general larger than the computer’s physical memory, so some virtual memory scheme is needed. • The OS maintains the virtual-to-physical mapping for each process and ensures that each process can access only memory locations in its own address space.

  15. File System • Another important resource that the OS manages is the set of files that programs access. • The file system is typically structured as a tree in which leaf nodes are files and interior nodes are directories. • Even if files are on separate physical devices, they can be combined in a single virtual hierarchy. • The command used to add a new subtree (on a separate physical device) to the file system is called mount. • A single file (or directory) can be placed in multiple directories, without copying it, by linking the file. A link is a special type of file (see the sketch below).
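
As an illustration, the POSIX link() and symlink() calls create hard and symbolic links; a minimal sketch (the path /tmp/data is a hypothetical existing file):

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* hard link: a second directory entry referring to the same file */
        if (link("/tmp/data", "/tmp/data2") != 0)
            perror("link");
        /* symbolic link: a special file that contains the target's path */
        if (symlink("/tmp/data", "/tmp/data-sym") != 0)
            perror("symlink");
        return 0;
    }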

  16. File System

  17. Virtual File Hierarchy

  18. Interprocess Communication • Separate processes can communicate with each other using an OS-provided service called Interprocess Communication (IPC). • UNIX processes can use sockets or pipes to establish communication with other processes. • IPC and process spawning are relatively slow, leading to the idea of lightweight processes, or threads, in many modern OSs. • Threads spawned by the same process can communicate through shared memory.
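
A minimal sketch of pipe-based IPC between a parent and a child process (POSIX; the buffer size and message are illustrative):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];
        char buf[32];
        if (pipe(fd) != 0) { perror("pipe"); return 1; }
        if (fork() == 0) {                 /* child: writes into the pipe */
            close(fd[0]);
            write(fd[1], "hello", 6);      /* 6 bytes: "hello" plus '\0' */
            _exit(0);
        }
        close(fd[1]);                      /* parent: reads from the pipe */
        read(fd[0], buf, sizeof buf);
        printf("parent received: %s\n", buf);
        wait(NULL);
        return 0;
    }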

  19. Interprocess Communication

  20. User vs. Kernel Mode • In order to make the system more stable, many OSs distinguish between processes operating in user mode and those operating in kernel mode. • User-mode processes can access only memory locations in their own address space, cannot directly access hardware, etc. Kernel-mode processes have no such restrictions. • The fewer processes that run in kernel mode, the more robust the system should be. On the other hand, mode switching slows down the system.
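
Every system call is a controlled switch from user mode to kernel mode and back. As a sketch, Linux’s raw syscall() interface makes the trap explicit (SYS_write is the number of the write system call; details vary by architecture):

    #define _GNU_SOURCE
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        const char msg[] = "hello from user mode\n";
        /* syscall() traps into the kernel (mode switch), the kernel runs
           the write handler, then execution returns to user mode. */
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }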

  21. User vs. Kernel Mode

  22. Other OS Requirements • Besides controlling files, processes and IPC, an OS has several other important tasks: • Controlling I/O and other peripheral devices • Networking • Security and access control for the various resources • Processing user commands

  23. Introduction to Distributed Systems • Distributed systems (as opposed to centralized systems) are composed of large numbers of CPUs connected by a high-speed network.

  24. Definition of a Distributed System • A distributed system is: a collection of independent computers that appears to its users as a single coherent system.

  25. Introduction to Distributed Systems • Advantages of distributed systems: • Some applications involve spatially separated machines. • If one machine crashes, the system as a whole can still survive. • Computing power can be added in small increments. • Many users can share expensive peripherals like color printers. • The workload can be spread over the available machines in the most cost-effective way.

  26. Introduction to Distributed Systems • Disadvantages of distributed systems: • Less software exists for distributed systems. • The network can saturate or cause other problems. • Easy access also applies to secret data.

  27. Classification of Multiple CPU Systems • Distributed systems are multiple CPU systems (as are parallel systems; we don’t distinguish between the two). • In order to compare various multiple CPU systems, we would like a classification. • There is no completely satisfactory classification. The most well-known (but now outdated) is due to Flynn: • MIMD (Multiple Instruction Stream Multiple Data Stream), subdivided into: • Multiprocessor (shared memory) • Multicomputer (no shared memory)

  28. Introduction to Distributed Systems • SIMD (Single Instruction Multiple Data Stream) • SISD (Single Instruction Single Data Stream): traditional computers. • MISD (Multiple Instruction Stream Single Data Stream): No known examples. • Each of these categories can further be characterized as either: • bus or • switch • tightly coupled or • loosely coupled

  29. Hardware Concepts (Fig. 1.6)

  30. Bus-Based Multiprocessors • Bus-based multiprocessors consist of some number of CPUs all connected to a common bus, along with a memory module. • A typical bus has 32 or 64 address lines, 32 or 64 data lines, and 32 or more control lines, all operating in parallel. • Since there is just one memory, if one CPU writes a word of memory and another CPU immediately reads that word, the value read will be the value written. • The memory is said to be coherent. • With more than a few CPUs, however, this configuration causes the bus to become overloaded.

  31. Bus-Based Multiprocessors • A bus-based multiprocessor (Fig. 1.7)

  32. Bus-Based Multiprocessors • The solution to this problem is to add a high-speed cache memory between the CPU and the bus. • The cache holds the most recently accessed words. All memory requests go through the cache. • The probability that a requested word is found in the cache is called the hit rate. • If the hit rate is high, bus traffic is dramatically reduced; with a 90% hit rate, for example, only one reference in ten reaches the bus.

  33. Bus-Based Multiprocessors • The use of a cache gives rise to the cache coherence problem. • One solution is the use of a write-through cache. • The caches must also monitor the bus and invalidate their entries when another CPU writes to a cached address. • This monitoring is called snooping. • Most bus-based multiprocessors use these techniques.

  34. Bus-Based Multiprocessors • Facts about caches: • They can be write-through: when the processor issues a store, the value goes into the cache and is also sent to memory. • They can be write-back: the value goes only into the cache. • In this case the cache line is marked dirty, and when it is evicted it must be written back to memory. • Because of the broadcast nature of the bus, it is not hard to design snooping caches that maintain consistency.
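
The difference can be sketched in C as bookkeeping on a cache line’s dirty bit (a toy model, not real hardware; names are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint32_t data;
        bool dirty;                         /* meaningful only for write-back */
    } cache_line;

    /* write-through: every store updates the cache and memory */
    void store_through(cache_line *line, uint32_t *mem_word, uint32_t v) {
        line->data = v;
        *mem_word = v;                      /* the store also goes on the bus */
    }

    /* write-back: the store updates only the cache */
    void store_back(cache_line *line, uint32_t v) {
        line->data = v;
        line->dirty = true;                 /* memory is now stale */
    }

    /* on eviction, a dirty line must be written back to memory */
    void evict(cache_line *line, uint32_t *mem_word) {
        if (line->dirty) *mem_word = line->data;
        line->dirty = false;
    }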

  35. Bus-Based Multiprocessors • Bus-based multiprocessors are also called Symmetric Multiprocessors (SMPs), and they are very common. • The bus can also connect multiple processors on the same die, leading to multicore systems. • Disadvantage (really a limitation): • They cannot be used for a large number of processors, since the required bus bandwidth grows with the number of processors and becomes increasingly difficult and expensive to supply. • Moreover, latency grows with the number of processors, for both speed-of-light and more complicated electrical reasons.

  36. Symmetric Multiprocessors [Diagram: four processors on a shared bus with memory, I/O, and a LAN connection]

  37. Two Non-SMP Multiprocessors [Diagram: two processors, each with a private memory, on a bus with shared memory, I/O, and a LAN connection]

  38. Two Non-SMP Multiprocessors [Diagram: two processors on a bus with memory, I/O, and a LAN connection]

  39. Dual-core Dual-Processor System

  40. Switched Multiprocessors • To build a multiprocessor with more than (say) 64 processors, a different method is needed to connect the CPUs with the memory. • One interconnection method is the use of a crossbar switch in which each CPU is connected to each interleaved bank of memory. • This method requires n² crosspoint switches for n memories and n CPUs, which can be expensive for large n.

  41. Switched Multiprocessors • A crossbar switch • An omega switching network (Fig. 1.8)

  42. Switched Multiprocessors • The omega network is an example of a multistage network, which requires a smaller number of switches: (n log₂ n)/2, arranged in log₂ n switching stages. • The number of stages slows down memory access, however. • Another alternative is the use of hierarchical systems called NUMA (NonUniform Memory Access) machines. • Each CPU accesses its own memory quickly, but everyone else’s more slowly.
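
A worked example of the trade-off: for n = 1024 CPUs and memories, a crossbar needs n² = 1,048,576 crosspoint switches, while an omega network needs (1024 × log₂ 1024)/2 = (1024 × 10)/2 = 5,120 switches; the price is that every memory reference must traverse log₂ 1024 = 10 switching stages.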

  43. A CC-NUMA [Diagram: four processors (Proc 1-4), each with its own cache, organized as two nodes, each node with its own memory (Memory 0, Memory 1) and I/O]

  44. NUMA • Production cc-NUMA systems are built around the AMD Opteron, which supports them well (e.g. by SGI). • Other cc-NUMA systems are built using Intel’s Itanium processor with additional hardware support (e.g. by HP, running HP-UX).

  45. NUMA • CC-NUMAs (Cache Coherent NUMAs) are programmed like SMPs, but to get good performance: • you must try to exploit the memory hierarchy and have most references hit in your local cache • most other references must hit in the part of the shared memory in your “node” (the CPUs on the local bus). • NC-NUMAs (Non Coherent NUMAs) are still harder to program, since you must maintain cache consistency manually (or with compiler help).

  46. Bus-Based Multicomputers • Multicomputers (with no shared memory) are much easier to build, and they scale much more easily. • Each CPU has a direct connection to its own local memory. • The only traffic on the interconnection network is CPU-to-CPU, so the volume of traffic is much lower. • The interconnection can be done using a LAN rather than a high-speed backplane bus.

  47. Bus-Based Multicomputers • In some sense, all the computers on the Internet form one enormous multicomputer. • The interesting case is when there is closer cooperation between processors; say, the workstations in one distributed systems research lab cooperating on a single problem. • Applications must tolerate long-latency communication and modest bandwidth, given the current state of the practice.

  48. Switched Multicomputers • The final class of systems is the switched multicomputers. • Various interconnection networks have been proposed and built. Examples are grids and hypercubes. • A hypercube is an n-dimensional cube. In an n-dimensional hypercube, each CPU has n connections to other CPUs (see the sketch below). • Hypercubes with thousands of CPUs (Massively Parallel Processors, or MPPs) have been available for several years.
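
Since hypercube node labels are n-bit numbers and neighbors differ in exactly one bit, a node’s n neighbors can be enumerated by flipping each bit in turn; a minimal sketch in C:

    #include <stdio.h>

    /* Print the n neighbors of node `id` in an n-dimensional hypercube.
       Each neighbor's label differs from `id` in exactly one bit. */
    void print_neighbors(unsigned id, unsigned n) {
        for (unsigned bit = 0; bit < n; bit++)
            printf("node %u <-> node %u\n", id, id ^ (1u << bit));
    }

    int main(void) {
        print_neighbors(5, 4);    /* node 0101 in a 4-D, 16-node hypercube */
        return 0;
    }

Routing between two nodes then takes as many hops as the number of bits in which their labels differ (the Hamming distance).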

  49. Switched Multicomputers • A grid • A hypercube (Fig. 1.9)

  50. Software Concepts • DOS (Distributed Operating Systems) • NOS (Network Operating Systems) • Middleware
