Course Objectives
• Provide an introduction to the basic components of a modern operating system:
  • operating system structure
  • process management
  • memory management
  • file systems
  • security
• Demonstrate how applications are dependent on the facilities provided by the operating system.
Course Content
• The course will stress:
  • Concepts (OS ideas)
  • Structure (OS organization)
  • Mechanisms (OS functionality)
• Design tradeoffs:
  • If we solve an OS problem using some particular design approach, what is the impact on cost, execution time, storage space, or some other aspect of the design?
  • How do cost, execution time, etc. change when we alter the design?
What is an Operating System?
• An Operating System (OS) provides an environment for the application programs that use the resources of a computer.
• The resources of the computer include:
  • the processor
  • memory
  • I/O devices
  • files
• As an environment the OS provides the following functionality:
  • a convenient interface between the user of a computer and the computer hardware
  • an efficient manager of computer resources
  • a base from which the system can evolve to work with new hardware and software modules
• The next slides consider these functions.
The User/Computer Interface
• The user works with the hardware of the computer by relying on several intermediate layers:
  • application programs
  • utilities
  • the operating system
• The OS presents to the upper layers a virtual machine that can be used via the services that it provides:
  • program execution (loading, initialization, …); see the sketch after this slide
  • access to I/O (initiation and control of devices)
  • access to files (file formats, organization, …)
• Also:
  • program creation (editors, debuggers, compilers)
  • system access (resource protection and sharing)
  • error detection and response (hardware errors, software bugs)
  • accounting (tracking resource usage, billing)
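To make one of these services concrete, here is a minimal sketch (mine, not the course's) of "program execution" through the POSIX interface: the application asks the OS to create a process and then to load and start another program; the use of ls here is just an arbitrary example.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                 /* ask the OS for a new process */
    if (pid == 0) {
        /* Child: ask the OS to load and start another program.
         * The OS handles loading and initialization of the new program. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* reached only if the load failed */
        _exit(1);
    }
    wait(NULL);                         /* the OS reports when the program ends */
    return 0;
}
```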
How Are Services Provided?
• Note: the OS is at work well before any user program asks for a service:
  • When the computer is turned on, the hardware executes a bootstrap loader held in ROM.
  • After a self-test of the hardware, the loader reads the OS in from disk.
  • The OS finishes testing and initializing the hardware and starts a command interpreter or GUI that waits for the user application to issue a "system call".
• The "syscall" will:
  • switch the hardware to kernel mode
  • force a branch to a particular location in memory containing OS code that will service the syscall.
• CPU registers are saved so that the user program can later be restarted.
Execution of the SysCall
[Diagram: a SysCall instruction in the user program transfers control, via the exception vectors in low-address memory, to the OS exception handler, which services the call and then lets the user program resume. A minimal code-level sketch follows.]
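A minimal sketch of what the two slides above describe, assuming a POSIX-style system: from the program's point of view write() is an ordinary function call, but inside the C library wrapper a trap/syscall instruction switches the CPU to kernel mode and branches to the OS service routine.

```c
#include <unistd.h>
#include <string.h>

int main(void) {
    const char msg[] = "hello from user mode\n";

    /* Looks like a normal function call, but the library wrapper executes
     * a trap/syscall instruction: the CPU switches to kernel mode, branches
     * to the OS handler for "write", and the handler saves the user
     * registers so the program can be restarted afterwards. */
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}
```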
The User/Computer Interface
• The OS extends the machine.
• It acts as an intermediary between user applications and the computer hardware.
• In a sense it creates a virtual machine for the user's application program.
[Diagram: layers, from top to bottom: User Applications and Utilities, OS, Hardware.]
The OS as Resource Manager
• Resources:
  • The processor
    • Work to be done is established in processes and threads of execution.
    • A multitasking OS will allocate the processor(s) across the various processes and threads so that they share this resource.
  • Memory
    • Under the control of the OS, programs run in a virtual address space that is mapped to a portion of the physical memory (see the sketch after this slide).
  • I/O devices
    • The OS controls I/O devices and manages the efficient transfer of data.
  • Files
    • As a resource, information is protected and shared.
    • It is formatted and organized for storage and retrieval.
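An illustrative sketch (not from the slides, assuming a POSIX-style mmap interface): a program that asks the OS for memory sees only virtual addresses; the OS decides which physical frames back them.

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 1 << 20;                       /* ask the OS for 1 MiB */

    /* The OS hands back a range of *virtual* addresses; it chooses (and
     * can later change) which physical frames actually back these pages. */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[0] = 'x';                                 /* first touch may fault in a frame */
    printf("virtual address of the new region: %p\n", (void *)p);

    munmap(p, len);                             /* give the resource back */
    return 0;
}
```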
The OS as Resource Manager (cont.)
• The OS controls and allocates resources.
• As such it is the guardian of resources while acting as a facilitator that promotes the sharing of resources by various applications running on the machine.
• The OS provides resource control and resource allocation while hiding the complexity of these resources.
The OS and its Ability to Evolve
• New hardware and hardware upgrades:
  • New devices and networks can be supported by writing a new device driver.
• New services:
  • In modern systems such as NT, system resources are modeled as objects (abstract data types that are manipulated by a special set of object services).
  • This modularity in the OS design allows new services to be added without a major impact on the rest of the system.
  • Applications are considered to be clients that obtain service through the use of an API (Application Programming Interface) that defines a type of subsystem (see the sketch after this slide).
  • In the case of NT, two subsystems are supported: Win32 and POSIX.
  • In addition, a Virtual DOS Machine (VDM) supports MS-DOS and 16-bit Windows applications.
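A rough illustration, not the deck's own example: the same logical request, "create a file and write to it", expressed through the Win32 API and the POSIX API. Which subsystem serves the request depends on the environment the application was built for.

```c
/* Sketch only: one logical operation, two subsystem APIs. */
#include <string.h>

#ifdef _WIN32
#include <windows.h>                  /* Win32 subsystem API */
void save(const char *text) {
    HANDLE h = CreateFileA("out.txt", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    DWORD written;
    WriteFile(h, text, (DWORD)strlen(text), &written, NULL);
    CloseHandle(h);
}
#else
#include <fcntl.h>                    /* POSIX subsystem API */
#include <unistd.h>
void save(const char *text) {
    int fd = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    write(fd, text, strlen(text));
    close(fd);
}
#endif

int main(void) { save("same request, different API\n"); return 0; }
```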
Tradeoffs
• As we go through the course material you should be aware of various OS design tradeoffs, for example:
  • tradeoffs between performance and simplicity
  • tradeoffs between execution speed and memory requirements
• Tradeoffs change as the technology improves.
• Going back to 1961, the DEC PDP-1 (Programmed Data Processor) had:
  • 4K of memory (18-bit words)
  • a cost of $120,000
  • yet this was less than 5% of the cost of the IBM 7094 mainframe.
Historical Perspective
• As technology progresses, some problems simply disappear.
  • E.g., today we do not worry about the proper storage of computer cards.
• Other, more fundamental problems stay with us (although perhaps in a different form) as the technology improves.
• The next few transparencies describe some of the issues that have been around for some time now.
Historical Perspective (cont.)
• Serial Processing
  • With the earliest machines, and up to the mid '50s, a user was allowed hands-on contact with the machine in this "Load and Go" environment.
    • Cards were submitted during the load phase.
    • During the go phase, the program was in execution.
  • The system had excessive set-up and tear-down time during which the machine was idle.
    • Set-up might require the loading of a compiler, setting up tape drives, etc.
  • To avoid other programmers wasting time waiting, the user "scheduled" time by means of a reservation sheet.
  • Scheduling of work is still an OS design issue; we cover this in Unit 5.
Historical Perspective (cont.)
• Batch Systems
  • To reduce set-up time, jobs with similar needs were batched and run sequentially using the same software environment (early '60s).
  • A resident monitor handled automatic job sequencing, allowing the computer to go through successive jobs in a reasonably independent fashion.
    • The monitor also handled interrupt processing, device drivers, and control-language interpretation.
  • The monitor resided in low memory, and a fence register pointed to the start of the area for a user program.
    • Any reference by the user program to an address below the address in the fence register initiated a trap (protection); see the sketch after the next slide.
  • Control of a job was specified by means of job control cards placed at the beginning and end of the program deck.
  • Necessary hardware features included: memory protection, a timer, interrupts, and privileged instructions.
  • These considerations will be important later when we discuss memory management in Unit 3.
Historical Perspective (cont.)
• The Fence Register
[Diagram: the monitor occupies low memory below the fence register; user references to addresses at or above the fence are OK, while references below the fence (into the monitor) are illegal.]
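A conceptual sketch (mine, not the course's) of the check the hardware performed on every user-mode memory reference; the addresses and the helper name are invented for illustration.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Every address generated in user mode is compared against the fence
 * register; references below the fence (the monitor's area in low memory)
 * are illegal and cause a trap into the monitor. */
static bool fence_allows(uint32_t addr, uint32_t fence_register) {
    return addr >= fence_register;
}

int main(void) {
    uint32_t fence = 0x4000;    /* assume the monitor occupies 0x0000-0x3FFF */
    printf("0x5000 -> %s\n", fence_allows(0x5000, fence) ? "OK" : "trap");
    printf("0x1000 -> %s\n", fence_allows(0x1000, fence) ? "OK" : "trap");
    return 0;
}
```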
Historical Perspective (cont.)
• I/O Enhancements for Batch Systems (1955-1965)
  • In the processing of data, there is a very large time difference when CPU speeds are compared with I/O speeds.
  • This can be smoothed out, to some extent, by means of buffer areas in memory (see the sketch after this slide).
  • Off-line operations used a tape drive as an intermediary between card decks and memory.
  • Spooling is similar to off-line operation, except that a disk is used as an intermediary between memory and cards (or tape).
  • Note that these strategies deal with data transfer in the so-called "storage hierarchy": tape <> disk <> memory <> data cache <> registers
  • This hierarchy is due to a tradeoff between cost and access speed.
  • Modern systems still must deal with transfers of data in the storage hierarchy (e.g. see Unit 6).
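A small modern analogue of buffering, sketched with standard C stdio (my example, not the notes'): many tiny writes accumulate in a memory buffer, so the slow device behind the OS sees only a few large transfers.

```c
#include <stdio.h>

int main(void) {
    FILE *f = fopen("out.txt", "w");
    if (!f) return 1;

    char buf[1 << 16];                      /* 64 KiB buffer in memory */
    setvbuf(f, buf, _IOFBF, sizeof buf);    /* fully buffered I/O */

    /* Thousands of tiny writes land in the buffer; the OS and the device
     * see only a few large transfers when the buffer is flushed. */
    for (int i = 0; i < 100000; i++)
        fprintf(f, "record %d\n", i);

    fclose(f);                              /* flushes the final buffer */
    return 0;
}
```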
Historical Perspective (cont.)
• I/O enhancements also dealt with device independence, which allows the program to reference an I/O device using a logical (in this case symbolic) reference.
  • During execution, the logical I/O device reference is mapped to a physical I/O device that is free for use.
  • This is also referred to as I/O redirection (see the sketch after this slide).
  • Later in the course we will encounter other, more modern mechanisms that provide a mapping between logical and physical designations of a resource.
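A minimal modern analogue (my sketch, POSIX-flavoured): the program names only the logical device "standard output"; the shell or OS binds that logical reference to a terminal, a disk file, or a printer spooler at run time.

```c
#include <unistd.h>
#include <string.h>

int main(void) {
    const char msg[] = "results...\n";

    /* The program refers only to the logical device "standard output".
     * Which physical resource that is gets decided outside the program:
     *   ./report              -> terminal
     *   ./report > out.txt    -> disk file
     *   ./report | lpr        -> print spooler                        */
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}
```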
Historical Perspective (cont.)
• Multiprogramming (1965-1980)
  • Multiprogramming allows several programs to reside in memory at the same time (late '60s).
  • This increases CPU utilization by increasing the chances that some program will be ready for execution.
    • While the system does I/O for one program it can be executing another (see the sketch after the next slide).
  • Requirements:
    • memory management and protection
    • CPU scheduling
    • device scheduling and allocation
    • deadlock detection
    • more complexity (OS/360 was released with over 1000 bugs)
  • Now one machine can handle several I/O devices and will not be idle unless all programs are waiting for an I/O completion.
  • Note that this strategy also requires that I/O devices using block transfer work with DMA (Direct Memory Access) so that the CPU is not involved with the I/O traffic.
Advantages of Multiprogramming
• Allows the processor to execute another program while one program must wait for an I/O device.
[Diagram: with a single program, Run periods alternate with idle Wait periods; with two programs A and B, the processor runs B while A waits and runs A while B waits, so far less time is spent idle.]
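A toy illustration (not from the slides) using two POSIX processes; sleep() merely stands in for a long I/O wait during which one program is blocked and the scheduler can run the other.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                 /* two programs now share the machine */
    if (pid == 0) {
        /* "Program B": spends most of its time blocked (sleep() stands in
         * for a long I/O operation). */
        sleep(2);
        _exit(0);
    }
    /* "Program A": CPU-bound work the OS can run while B is blocked. */
    volatile unsigned long sum = 0;
    for (unsigned long i = 0; i < 200000000UL; i++)
        sum += i;
    wait(NULL);                         /* collect B once A is done */
    printf("sum = %lu\n", sum);
    return 0;
}
```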
Historical Perspective (cont.)
• Time Sharing
  • In the interactive environment, a user is provided with on-line information via a terminal that is connected to a central machine.
  • This eliminates the control-card environment.
  • Program debugging can be done through the use of breakpoints, thus eliminating static debugging techniques that rely on post-mortem memory dumps.
  • The basic idea is to have the system allocate the CPU to the various programs in memory in a rapid "round robin" fashion (see the sketch after this slide).
    • Each program gets a time slice: a short burst or quantum of computation.
    • When this is done rapidly enough, a short response time for an individual user creates the illusion that he or she is the sole user of the machine.
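A conceptual round-robin sketch in plain C (not real scheduler code; the process table and time units are invented): ready programs take turns, each running for at most one quantum before the next is dispatched.

```c
#include <stdio.h>

#define NPROCS  3
#define QUANTUM 2                       /* time units per turn */

int remaining[NPROCS] = {5, 3, 7};      /* toy model: work left per "process" */

int main(void) {
    int done = 0, current = 0;
    while (done < NPROCS) {
        if (remaining[current] > 0) {
            /* Dispatch: run the current process for at most one quantum. */
            int slice = remaining[current] < QUANTUM ? remaining[current] : QUANTUM;
            remaining[current] -= slice;
            printf("run P%d for %d unit(s), %d left\n",
                   current, slice, remaining[current]);
            if (remaining[current] == 0)
                done++;
        }
        current = (current + 1) % NPROCS;   /* round robin: next process */
    }
    return 0;
}
```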
OS Architectures
• Operating systems can be structured in a variety of ways:
• The monolithic model
  • The OS is a set of procedures that can call one another as necessary to complete a task.
  • OS code runs in kernel mode; applications run in user mode.
• The layered model
  • The OS is a set of modules that form a sequence of layers.
  • A procedure in one particular layer is only allowed to call procedures in a lower layer.
  • This restricted communication simplifies the design and aids in the debugging of the OS.
• The client/server model
  • The OS is made up of many processes, each providing one or more services.
  • Each server runs in user mode and makes itself available for requests from clients (another OS component or an application program); a message-passing sketch follows the diagrams.
OS Architectures: Monolithic
[Diagram: applications in user mode call system services; the OS procedures all run together in kernel mode, directly above the hardware.]
OS Architectures: Layered
[Diagram: applications in user mode call system services; in kernel mode the OS procedures are organized as layers, e.g. file system, memory and I/O device management, and processor scheduling (only a few samples shown), above the hardware.]
OS Architectures: Client-Server
[Diagram: a client application and servers (process, file, memory, network, display) all run in user mode and exchange send/reply messages through a microkernel running in kernel mode above the hardware. Note: only the microkernel has access to all memory spaces.]
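To make the message flow concrete, here is a rough sketch in which the message layout and the ipc_send()/ipc_recv() calls are entirely hypothetical stand-ins for a microkernel's IPC primitives; the "file server" is simulated in-process so the example runs.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical message format and server id (illustration only). */
enum { FILE_SERVER = 4, OP_READ = 1 };
struct msg { int op; char path[64]; char data[256]; size_t len; };

static struct msg mailbox;              /* stand-in for kernel message passing */

/* In a real microkernel these calls would trap into the kernel, which
 * delivers the message to the server process; here they simulate the
 * file server locally. */
int ipc_send(int server, const struct msg *m) {
    (void)server;
    mailbox = *m;                                       /* "deliver" the request */
    if (mailbox.op == OP_READ) {                        /* file server's reply   */
        snprintf(mailbox.data, sizeof mailbox.data, "contents of %s", m->path);
        mailbox.len = strlen(mailbox.data);
    }
    return 0;
}
int ipc_recv(int server, struct msg *reply) { (void)server; *reply = mailbox; return 0; }

int main(void) {
    struct msg req = { .op = OP_READ }, reply;
    strncpy(req.path, "notes.txt", sizeof req.path - 1);
    ipc_send(FILE_SERVER, &req);        /* client: ask the file server */
    ipc_recv(FILE_SERVER, &reply);      /* client: wait for the reply  */
    printf("%.*s\n", (int)reply.len, reply.data);
    return 0;
}
```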
Advantages of the Client/Server Model
• Simplification
  • The executive supports a group of protected subsystems that can interact with a user.
  • Each protected subsystem provides the API for a particular operating environment:
    • POSIX (Portable Operating System Interface, based on UNIX)
    • Windows
  • A new API can be added without changing the executive.
• Reliability
  • Each server runs as a separate process.
• A base for distributed computing
  • A local server can pass a message on to a remote server for processing on behalf of a local client application.