
HIGH PERFORMANCE COMPUTING: MODELS, METHODS, & MEANS. OPERATING SYSTEMS 1

Prof. Thomas Sterling, Center for Computation & Technology, Louisiana State University. April 5th, 2011.


Presentation Transcript


  1. Prof. Thomas Sterling, Center for Computation & Technology, Louisiana State University, April 5th, 2011. HIGH PERFORMANCE COMPUTING: MODELS, METHODS, & MEANS. OPERATING SYSTEMS 1

  2. This Page Left Intentionally Blank

  3. Opening Remarks: Where Are We? • The two ends: what we’ve covered so far • From the top down: • User applications • Parallel programming methods • Algorithms for distributed computing • From the bottom up: • Enabling device technologies • Microarchitectures • Parallel system architectures • Performance as a cross-cutting theme • We’re now at the system center: • The Operating System • It owns the computer • It controls the applications • It facilitates your needs but limits your access • It protects you from others, and them from you

  4. Opening Remarks: Where Are We Going? • Next two lectures are on OS • Principles • Linux components • Middleware • Practical System Usage (next week) • Scheduling • Checkpointing • System Administration • Beyond and Beyond • You need to know what you don’t know • Field of HPC beyond this Introduction course • Future of HPC over the next decade

  5. Topics • Introduction • Operating System Structures & Services • Process Management • Threads • Memory Management • Security & Protection • Modern Operating Systems • Command-line Interpreter System • Unix • Linux Introduction • Summary Materials for Test

  6. Topics • Introduction • Operating System Structures & Services • Process Management • Threads • Memory Management • Security & Protection • Modern Operating Systems • Command-line Interpreter System • Unix • Linux Introduction • Summary Materials for Test

  7. Operating System • What is an Operating System? • A persistent program that controls the execution of application programs • An interface between applications and hardware • Primary functionality • Exploits the hardware resources of one or more processors • Provides a set of services to system users • Manages secondary memory and I/O devices • Objectives • Convenience: Makes the computer more convenient to use • Efficiency: Allows computer system resources to be used in an efficient manner • Reliability: through protection between jobs • Ability to evolve: Permit effective development, testing, and introduction of new system functions without interfering with service Source: William Stallings “Operating Systems: Internals and Design Principles (5th Edition)”

  8. Layers of Computer System

  9. Resources Managed by the OS • Processor • Main Memory • volatile • referred to as “main memory” or “primary storage” • Also “physical memory” or “core” • I/O modules • secondary memory devices • communications equipment • terminals • System bus • communication among processors, memory, and I/O modules

  10. OS as Resource Manager (diagram: the computer system’s processors, main memory, system bus, and I/O controllers with attached devices such as printers, keyboards, digital cameras, and storage; the operating-system software, programs, and data reside in memory and storage and are managed by the OS)

  11. Topics • Introduction • Operating System Structures & Services • Process Management • Threads • Memory Management • Security & Protection • Modern Operating Systems • Command-line Interpreter System • Unix • Linux Introduction • Summary Materials for Test

  12. Operating System Structure • An operating system can be examined in various ways: • The system components & their interconnections • Services provided by different components of an OS • Interfaces that it makes available to users and programmers • System Components • Process Management • Main-Memory Management • File Management • I/O System Management • Secondary Storage Management • Networking • Protection & Security Systems

  13. Operating System Components: Overview • Process Management: creating and deleting user and system processes, suspending and resuming processes, mechanisms for process communication, process synchronization, & deadlock handling • Memory Management: managing usage of memory, loading processes into memory, allocation and de-allocation of memory • File Management: creating and deleting files, creating and deleting directories, manipulating files and directories, mapping onto secondary storage, etc. • I/O System Management: buffering, caching, spooling, general device-driver interfaces, drivers for specific hardware devices • Secondary Storage Management: free-space management, storage allocation, disk scheduling • Networking: communication drivers, protocols • Protection & Security Systems: controlling access of programs, processes, or users to the resources defined by the computer system

  14. Operating System Services • The operating system provides an environment in which to execute programs. The following are some of the services an operating system provides: • Program execution: ability to load a program into memory and execute it; the program must be able to end execution normally or abnormally • I/O operations: help with input/output operations to a file or an I/O device • File System Manipulation: read, write, modify, create, and delete files by name • Communication: facilitate exchange of information between processes through shared memory or message passing • Error detection and handling: monitor for potential errors in the CPU, memory hardware, I/O devices, and external devices, and for potential errors in user programs such as access to an illegal memory location

  15. Multiprogramming & Multitasking • Multiprogramming needed for efficiency • A single user cannot keep CPU and I/O devices busy at all times • Multiprogramming organizes jobs (code and data) so the CPU always has one to execute • A subset of the total jobs in the system is kept in memory • One job is selected and run via job scheduling • When it has to wait (for I/O, for example), the OS switches to another job • Timesharing (multitasking) is a logical extension in which the CPU switches jobs so frequently that users can interact with each job while it is running, creating interactive computing • Response time should be < 1 second • Each user has at least one program executing in memory, called a process • If several jobs are ready to run at the same time, CPU scheduling chooses among them • If processes don’t fit in memory, swapping moves them in and out to run • Virtual memory allows execution of processes not completely in memory

  16. Multiprogramming and Multiprocessing

  17. Topics • Introduction • Operating System Structures & Services • Process Management • Threads • Memory Management • Security & Protection • Modern Operating Systems • Command-line Interpreter System • Unix • Linux Introduction • Summary Materials for Test

  18. Process Management • A process is a program in execution. It is a unit of work within the system. A program is a passive entity; a process is an active entity. • A process needs resources to accomplish its task • CPU, memory, I/O, files • Initialization data • A process is composed of • program counter (points into the code section) • process stack (contains temporary data: local variables, return addresses) • code section (executable code) • data section (contains global variables) • Process termination requires reclaiming any reusable resources • A single-threaded process has one program counter specifying the location of the next instruction to execute • The process executes instructions sequentially, one at a time, until completion • A multi-threaded process has one program counter per thread • Typically a system has many processes, some user and some operating system, running concurrently on one or more CPUs • Concurrency by multiplexing the CPUs among the processes / threads
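
The passive-program versus active-process distinction becomes concrete with the POSIX fork() system call, which creates a new process as a copy of the caller. A minimal sketch, assuming a POSIX system:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                 /* create a second process: copy of code, data, stack */
        if (pid < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        } else if (pid == 0) {
            /* child: has its own program counter, stack, and data section from here on */
            printf("child:  pid=%d\n", (int)getpid());
            exit(EXIT_SUCCESS);
        } else {
            /* parent: waiting lets the OS reclaim the child's resources at termination */
            waitpid(pid, NULL, 0);
            printf("parent: child %d finished\n", (int)pid);
        }
        return 0;
    }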

  19. Process States • As a process executes it moves among the following states: • New: the process is being created • Running: instructions are being executed • Waiting: the process is waiting for some event to occur • Ready: the process is waiting to be assigned to a processor • Terminated: the process has finished execution
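
One way to picture the five states above is as a small enumeration that a kernel or a scheduling simulator might keep per process; the names below are hypothetical, not taken from any real kernel:

    #include <stdio.h>

    /* Hypothetical process-state enumeration mirroring the slide's five states. */
    enum proc_state { PROC_NEW, PROC_READY, PROC_RUNNING, PROC_WAITING, PROC_TERMINATED };

    static const char *state_name[] = { "new", "ready", "running", "waiting", "terminated" };

    int main(void) {
        /* Typical life cycle: new -> ready -> running -> (waiting -> ready -> running)* -> terminated */
        enum proc_state s = PROC_NEW;
        printf("process starts in state: %s\n", state_name[s]);
        return 0;
    }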

  20. Process Control Block • Each process in the operating system is represented by a Process Control Block (PCB), which contains: • Process state: current state of the process (new / ready / running / waiting / halted…) • Program counter: the address of the next instruction to be executed for the process • CPU registers: registers vary in number and type depending on the architecture; they include accumulators, index registers, stack pointers, general-purpose registers, etc. • CPU scheduling information: process priority, pointers to scheduling queues, and other parameters • Memory management information: values of the base and limit registers, page tables, segment tables, etc. • Accounting information: amount of CPU and real time used, time limits, job or process numbers • I/O status: list of I/O devices allocated to the process, list of open files, etc.
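
A PCB maps naturally onto a C structure. The sketch below is purely illustrative: the field names, sizes, and fixed-length register array are assumptions, and a real kernel's process descriptor (e.g. Linux's task_struct) is far larger:

    #include <stdint.h>

    /* Hypothetical Process Control Block layout, one instance per process. */
    struct pcb {
        int        pid;               /* job / process number (accounting)                     */
        int        state;             /* new, ready, running, waiting, or terminated           */
        uintptr_t  program_counter;   /* address of the next instruction to execute            */
        uintptr_t  registers[32];     /* saved CPU registers (count is architecture-dependent) */
        int        priority;          /* CPU-scheduling information                            */
        uintptr_t  page_table_base;   /* memory-management information                         */
        uint64_t   cpu_time_used;     /* accounting information                                */
        int        open_files[16];    /* I/O status: descriptors of open files                 */
        struct pcb *next;             /* link field used by the scheduling queues              */
    };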

  21. Process Management Activities The operating system is responsible for the following activities in connection with process management: • Creating and deleting both user and system processes • Suspending and resuming processes • Providing mechanisms for process synchronization • Providing mechanisms for process communication • Providing mechanisms for deadlock handling

  22. Process Scheduling • When a process enters the system, it is put into a job queue, which consists of all processes in the system. • Processes that are residing in main memory and are ready and waiting to execute are kept in a list called the ready queue (usually a linked list) • A ready-queue header contains pointers to the first and final PCBs in the list. • The list of processes waiting for a particular I/O device is called a device queue (each device has its own queue) • A process is initially put into the ready queue, where it waits until it is selected for execution. Once a process is assigned to the CPU for execution, one of the following could occur: • The process could issue an I/O request and be placed in a device queue • The process could create new sub-processes and wait for their termination • The process could be removed forcibly from the CPU as a result of an interrupt and be put back in the ready queue
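
Because the slide describes the ready queue as a linked list of PCBs with pointers to the first and last entries, a minimal enqueue/dequeue sketch (using a stripped-down, hypothetical struct pcb) might look like this:

    #include <stddef.h>

    /* Hypothetical, stripped-down PCB with the link field used by the queue. */
    struct pcb { int pid; struct pcb *next; /* ... other PCB fields ... */ };

    /* Ready queue: singly linked list with head and tail pointers, as on the slide. */
    struct ready_queue { struct pcb *head, *tail; };

    static void rq_enqueue(struct ready_queue *q, struct pcb *p) {
        p->next = NULL;
        if (q->tail) q->tail->next = p;   /* append after the current last PCB */
        else         q->head = p;         /* queue was empty                   */
        q->tail = p;
    }

    static struct pcb *rq_dequeue(struct ready_queue *q) {
        struct pcb *p = q->head;          /* next process to hand to the CPU   */
        if (p) {
            q->head = p->next;
            if (!q->head) q->tail = NULL;
        }
        return p;
    }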

  23. Process Schedulers • Throughout a process’s lifetime, the OS must select processes for scheduling purposes from the various scheduling queues. This selection is carried out by the appropriate scheduler. • Often more processes are submitted than can be executed immediately; these processes are spooled to a mass-storage device. • The long-term scheduler (or job scheduler) selects processes from this pool and loads them into memory for execution. • The short-term scheduler selects from among the processes that are ready to execute and allocates the CPU to one of them.

  24. Process Scheduling • In general, a process can be described as I/O bound or CPU bound. • An I/O-bound process spends more time doing I/O than doing computation • A CPU-bound process generates I/O requests infrequently and spends more time doing computation • The long-term scheduler should schedule a good mix of I/O-bound & CPU-bound processes • Some operating systems may introduce an additional, intermediate level of scheduling: a medium-term scheduler, which removes a process from memory and reintroduces it at some later time so that its execution can continue where it left off. This scheme is called swapping

  25. Processes: Context Switch • Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process; this task is called a context switch. • The context of a process is represented in its PCB; it includes the values of the CPU registers, the process state, and memory-management information. • When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run. • Context-switch time is pure overhead, because the system does no useful work while switching. • Context-switch time depends on various factors, including memory speed, the number of registers, the existence of special instructions, and the type of machine.
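
The kernel's context switch cannot be reproduced from user space, but the POSIX ucontext routines (getcontext, makecontext, swapcontext; obsolescent yet still available on Linux) demonstrate the same save-one-register-set, load-another idea at user level. A minimal sketch:

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;

    static void task(void) {
        puts("task: running on its own stack with its own register context");
    }                                         /* returning resumes uc_link, i.e. main_ctx */

    int main(void) {
        static char stack[64 * 1024];         /* stack for the second context             */

        getcontext(&task_ctx);                /* start from a copy of the current context */
        task_ctx.uc_stack.ss_sp   = stack;
        task_ctx.uc_stack.ss_size = sizeof stack;
        task_ctx.uc_link          = &main_ctx;
        makecontext(&task_ctx, task, 0);      /* rewire it to run task()                  */

        puts("main: saving this context, loading the task's");
        swapcontext(&main_ctx, &task_ctx);    /* user-level analogue of a context switch  */
        puts("main: resumed after the switch back");
        return 0;
    }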

  26. Topics • Introduction • Operating System Structures & Services • Process Management • Threads • Memory Management • Security & Protection • Modern Operating Systems • Command-line Interpreter System • Unix • Linux Introduction • Summary Materials for Test

  27. Threads • Threads, sometimes called lightweight processes (LWPs), are a basic unit of CPU utilization • A thread comprises a thread ID, a program counter, a register set, and a stack. A process may have one or more threads of control. • User threads are supported above the kernel and implemented by a thread library at the user level. • The library provides support for thread creation, scheduling, and management with no support from the kernel. • Therefore user threads are generally fast to create and manage • e.g., C-threads, UI-threads • Kernel threads are supported directly by the operating system • The kernel performs thread creation, scheduling, and management in kernel space. • Kernel threads are generally slower to create and manage due to the management overhead of the operating system • e.g., Pthreads
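
Since the slide cites Pthreads, a minimal create/join example (assuming a POSIX system, compiled with -pthread) illustrates threads sharing an address space while each has its own stack and registers:

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg) {
        int id = *(int *)arg;
        printf("thread %d: own stack and registers, shared address space\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t tid[2];
        int ids[2] = {0, 1};

        for (int i = 0; i < 2; i++)
            pthread_create(&tid[i], NULL, worker, &ids[i]);  /* spawn a thread of control */
        for (int i = 0; i < 2; i++)
            pthread_join(tid[i], NULL);                      /* wait for it to finish     */

        return 0;
    }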

  28. Multi-threading models • Many systems provide support for both kernel and user threads, resulting in multithreading models. Three common types of threading implementations are the many-to-one, one-to-one, and many-to-many models.

  29. CPU Scheduling • The objective of multiprogramming is to have some process running at all times, in order to maximize CPU utilization. • Scheduling is a fundamental operating-system function; almost all resources are scheduled before use. With the CPU being the primary resource, CPU scheduling has a significant impact on OS design & operation • CPU-I/O Burst Cycle: • Process execution consists of a cycle of CPU execution and I/O wait; processes alternate between these two states • Process execution begins with a CPU burst, followed by an I/O burst, followed by another CPU burst and then another I/O burst, and so on • The last CPU burst ends with a system request to terminate execution
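
As a small worked example of why the scheduling decision matters, the sketch below computes the average waiting time for first-come-first-served order over three hypothetical CPU-burst lengths:

    #include <stdio.h>

    int main(void) {
        /* Hypothetical CPU-burst lengths (ms) of three processes arriving at time 0. */
        int burst[] = {24, 3, 3};
        int n = 3, wait = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {       /* FCFS: run in arrival order             */
            total_wait += wait;             /* process i waits for all earlier bursts */
            wait += burst[i];
        }
        /* Waits are 0, 24, 27 ms -> average 17 ms; running the two short bursts first
           would give waits of 0, 3, 6 ms -> average 3 ms.                             */
        printf("FCFS average waiting time: %.1f ms\n", (double)total_wait / n);
        return 0;
    }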

  30. Topics • Introduction • Operating System Structures & Services • Process Management • Threads • Memory Management • Security & Protection • Modern Operating Systems • Command-line Interpreter System • Unix • Linux Introduction • Summary Materials for Test

  31. Memory Management • Memory management determines what is in memory and when • All needed (accessed) data in memory • All needed (executed) instructions in memory in order to execute • Address translation tables • Optimizing CPU utilization and computer response to users • Memory management activities • Keeping track of which parts of memory are currently being used and by whom • Deciding which processes (or parts thereof) and data to move into and out of memory • Allocating and deallocating memory space as needed • Virtual to physical address translation

  32. Virtual Memory • Virtual Memory: • Allows programmers to address memory from a logical point of view • No hiatus between the execution of successive processes while one process is written out to secondary store and its successor is read in • Virtual Memory & File System: • Implement long-term store • Information stored in named objects called files • Paging: • Allows a process to be comprised of a number of fixed-size blocks, called pages • A virtual address is a page number and an offset within the page • Each page may be located anywhere in main memory • Page translation table in memory
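
The claim that a virtual address is a page number plus an offset can be made concrete with a short calculation; the 4 KiB page size and the example address below are assumptions chosen only for illustration:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE  4096u    /* assumed 4 KiB pages */
    #define PAGE_SHIFT 12       /* log2(PAGE_SIZE)     */

    int main(void) {
        uint64_t vaddr  = 0x12A38;                    /* example virtual address   */
        uint64_t page   = vaddr >> PAGE_SHIFT;        /* index into the page table */
        uint64_t offset = vaddr & (PAGE_SIZE - 1);    /* position within the page  */

        /* The page table (or a TLB hit) maps 'page' to a frame number; the physical
           address is then (frame << PAGE_SHIFT) | offset.                          */
        printf("vaddr 0x%llx -> page %llu, offset 0x%llx\n",
               (unsigned long long)vaddr, (unsigned long long)page,
               (unsigned long long)offset);
        return 0;
    }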

  33. Translation Lookaside Buffer

  34. Paging Diagram

  35. Storage Management • The OS provides a uniform, logical view of information storage • Abstracts physical properties to a logical storage unit - the file • Each medium is controlled by a device (e.g., disk drive, tape drive) • Varying properties include access speed, capacity, data-transfer rate, access method (sequential or random) • File-system management • Files usually organized into directories • Access control on most systems to determine who can access what • OS activities include • Creating and deleting files and directories • Primitives to manipulate files and directories • Mapping files onto secondary storage • Backing up files onto stable (non-volatile) storage media
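
The uniform, logical view of storage shows up directly in the POSIX file interface: the same open/read/write/close calls apply whatever device holds the data. A minimal sketch (the file name is arbitrary):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        const char msg[] = "hello, file system\n";
        char buf[64];

        int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);  /* create or overwrite */
        if (fd < 0) { perror("open"); return 1; }
        write(fd, msg, sizeof msg - 1);       /* the OS maps this onto secondary storage */
        close(fd);

        fd = open("demo.txt", O_RDONLY);      /* same interface for reading it back      */
        ssize_t n = read(fd, buf, sizeof buf - 1);
        close(fd);

        if (n > 0) { buf[n] = '\0'; fputs(buf, stdout); }
        return 0;
    }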

  36. Scheduling and Resource Management • Fairness • Give equal and fair access to resources • Differential responsiveness • Discriminate among different classes of jobs • Efficiency • Maximize throughput, minimize response time, and accommodate as many users as possible

  37. Topics • Introduction • Operating System Structures & Services • Process Management • Threads • Memory Management • Security & Protection • Modern Operating Systems • Command-line Interpreter System • Unix • Linux Introduction • Summary Materials for Test

  38. Protection and Security • Protection – any mechanism for controlling access of processes or users to resources defined by the OS • Security – defense of the system against internal and external attacks • Huge range, including denial-of-service, worms, viruses, identity theft, theft of service • Systems generally first distinguish among users, to determine who can do what • User identities (user IDs, security IDs) include a name and an associated number, one per user • The user ID is then associated with all files and processes of that user to determine access control • A group identifier (group ID) allows a set of users to be defined and controls to be managed for the group; it is then also associated with each process and file • Privilege escalation allows a user to change to an effective ID with more rights
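
The real-versus-effective user ID distinction behind privilege escalation (for example, set-user-ID executables) can be observed with standard POSIX calls:

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        uid_t ruid = getuid();    /* real user ID: who actually ran the program         */
        uid_t euid = geteuid();   /* effective user ID: identity used for access checks */
        gid_t gid  = getgid();    /* primary group ID                                   */

        /* For a set-user-ID executable the two user IDs differ, which is the
           "effective ID with more rights" mentioned on the slide.                      */
        printf("real uid=%d  effective uid=%d  gid=%d\n", (int)ruid, (int)euid, (int)gid);
        return 0;
    }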

  39. OS Kernel • Kernel: • Portion of operating system that is in main memory • Contains most frequently used functions • Also called the “nucleus”, “supervisor”, “monitor” • Hardware Features: • Memory protection: Do not allow the memory area containing the kernel to be altered • Timer: Prevents a job from monopolizing the system • Privileged instructions: Certain machine level instructions can only be executed by the kernel • Interrupts: Early computer models did not have this capability • Memory Protection • User program executes in user mode • Certain instructions may not be executed • Kernel executes in system mode • Kernel mode • Privileged instructions are executed • Protected areas of memory may be accessed

  40. Topics • Introduction • Operating System Structures & Services • Process Management • Threads • Memory Management • Security & Protection • Modern Operating Systems • Command-line Interpreter System • Unix • Linux Introduction • Summary Materials for Test

  41. Modern Operating Systems • Small operating system core • Contains only essential core operating systems functions • Many services traditionally included in the operating system are now external subsystems • Device drivers • File systems • Virtual memory manager • Windowing system • Security services • Microkernel architecture • Assigns only a few essential functions to the kernel • Address spaces/basic memory management • Interprocess communication (IPC) • Basic scheduling

  42. Modern Operating Systems • Multithreading • A process is divided into threads that can run concurrently • Thread • Dispatchable unit of work • Executes sequentially and is interruptible • A process is a collection of one or more threads • Symmetric multiprocessing (SMP) • There are multiple processors • These processors share the same main memory and I/O facilities • All processors can perform the same functions • Distributed operating systems • Provide the illusion of a single main-memory space and a single secondary-memory space • Object-oriented design • Used for adding modular extensions to a small kernel • Enables programmers to customize an operating system without disrupting system integrity

  43. Thread and SMP Management Example: Solaris Multithreaded Architecture

  44. Benefits of a Microkernel Organization • Uniform interface on requests made by a process • Doesn’t distinguish between kernel-level and user-level services • All services are provided by means of message passing • Extensibility • Allows the addition of new services • Flexibility • New features can be added & existing features can be removed • Portability • Changes needed to port the system affect only the microkernel itself • Reliability • Modular design • A small microkernel can be rigorously tested • Distributed system support • Messages are sent without knowing what the target machine is • Object-oriented operating system • Uses components with clearly defined interfaces (objects)
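
A full microkernel cannot be shown in a few lines, but the message-passing idea (services cooperate by exchanging messages rather than sharing kernel state) can be illustrated with an ordinary POSIX pipe between two processes; the "client"/"server" roles and the request text are hypothetical:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fds[2];
        if (pipe(fds) < 0) { perror("pipe"); return 1; }

        if (fork() == 0) {                       /* "client" process                    */
            close(fds[0]);
            const char req[] = "request: list-files";
            write(fds[1], req, sizeof req);      /* send a message; no memory is shared */
            close(fds[1]);
            return 0;
        }

        close(fds[1]);                           /* "server" process                    */
        char buf[64];
        ssize_t n = read(fds[0], buf, sizeof buf);
        if (n > 0) printf("server received: %s\n", buf);
        close(fds[0]);
        wait(NULL);
        return 0;
    }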

  45. Monolithic OS vs. Microkernel

  46. Topics • Introduction • Operating System Structures & Services • Process Management • Threads • Memory Management • Security & Protection • Modern Operating Systems • Command-line Interpreter System • Unix • Linux Introduction • Summary Materials for Test

  47. Command-Interpreter System • The command interpreter is the interface between the user and the operating system • Some operating systems include the command interpreter in the kernel, while others such as MS-DOS and UNIX treat the command interpreter as a special program • Commands are given to the operating system by control statements issued by a user (e.g., ls, rm, mv). These control statements are interpreted by a command interpreter, often known as the shell, whose main function is to get the next command statement and execute it • Some operating systems offer graphical user interfaces (GUIs) to perform the same operations (the Windows interface, Gnome/KDE in Linux)
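
The get-the-next-command-and-execute-it loop of a shell reduces, at its core, to read / fork / exec / wait. A deliberately minimal sketch (no pipes, quoting, or built-in commands; the prompt string is arbitrary):

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        char line[256];

        for (;;) {
            fputs("minish> ", stdout);                    /* prompt             */
            if (!fgets(line, sizeof line, stdin)) break;  /* EOF ends the shell */
            line[strcspn(line, "\n")] = '\0';
            if (line[0] == '\0') continue;

            /* split the command line into whitespace-separated arguments */
            char *argv[32];
            int argc = 0;
            for (char *tok = strtok(line, " \t"); tok && argc < 31; tok = strtok(NULL, " \t"))
                argv[argc++] = tok;
            argv[argc] = NULL;

            pid_t pid = fork();
            if (pid == 0) {                    /* child: become the requested program */
                execvp(argv[0], argv);
                perror(argv[0]);               /* reached only if exec fails          */
                _exit(127);
            }
            waitpid(pid, NULL, 0);             /* the shell waits, then prompts again */
        }
        return 0;
    }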

  48. DEMO • We won't do this today • Demonstrate common commands used to interact with the system • Linux • Windows

  49. Topics • Introduction • Operating System Structures & Services • Process Management • Threads • Memory Management • Security & Protection • Modern Operating Systems • Command-line Interpreter System • Unix • Linux Introduction • Summary Materials for Test

  50. Brief History of UNIX • Initially developed at Bell Labs in the late 1960s by a group including Ken Thompson, Dennis Ritchie, and Douglas McIlroy • Originally named Unics, in contrast to Multics, a novel experimental OS at the time • The first deployment platform was the PDP-7 in 1970 • Rewritten in C in 1973 to enable portability to other machines (most notably the PDP-11) – an unusual strategy, as most OSes were written in assembly language • Version 6 (version numbers were determined by editions of the system manuals), released in 1976, was the first widely available version outside Bell Labs • Version 7 (1978) is the ancestor of most modern UNIX systems • The most important non-AT&T implementation is BSD UNIX, developed at the University of California, Berkeley, to run on the PDP-11 and VAX • By 1982 Bell Labs had combined various UNIX variants into a single system, marketed as UNIX System III, which later evolved into System V
