
Processes and operating systems




1. Processes and operating systems
• Scheduling policies:
  • RMS;
  • EDF.
• Scheduling modeling assumptions.
• Interprocess communication.
• Power management.

2. Metrics
• How do we evaluate a scheduling policy:
  • Ability to satisfy all deadlines.
  • CPU utilization---percentage of time devoted to useful work.
  • Scheduling overhead---time required to make a scheduling decision.

3. Rate-monotonic scheduling
• RMS (Liu and Layland): widely used, analyzable scheduling policy.
• Analysis is known as Rate Monotonic Analysis (RMA).

4. RMA model
• All processes run on a single CPU.
• Zero context switch time.
• No data dependencies between processes.
• Process execution time is constant.
• Deadline is at end of period.
• Highest-priority ready process runs.

5. Process parameters
• Ti is the computation time of process i; ti is the period of process i.
• [Figure: timeline for process Pi showing its period ti and its computation time Ti within that period.]
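As an illustrative sketch only (the struct and field names are mine, not the book's), the per-process parameters above could be recorded like this in C:

    /* Illustrative encoding of the slide's notation: computation time Ti
       and period ti for process Pi; the deadline is the end of the period. */
    struct process_params {
        long T;   /* computation time Ti */
        long t;   /* period ti */
    };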

6. Rate-monotonic analysis
• Response time: time required to finish the process.
• Critical instant: scheduling state that gives worst response time.
• Critical instant occurs when all higher-priority processes are ready to execute.

7. Critical instant
• [Figure: critical instant for P4, which occurs when the interfering higher-priority processes P1, P2, and P3 are all ready to execute at the same time as P4.]

8. RMS priorities
• Optimal (fixed) priority assignment:
  • shortest-period process gets highest priority;
  • priority inversely proportional to period;
  • break ties arbitrarily.
• No fixed-priority scheme does better.

9. RMS example
• [Figure: RMS schedule for two processes over time 0 to 10; P1 has the shorter period and therefore the higher priority, so P1 runs within each of P2's periods.]

10. RMS CPU utilization
• Utilization for n processes is U = Σi Ti / ti.
• As the number of tasks approaches infinity, the maximum guaranteed utilization n(2^(1/n) - 1) approaches ln 2, about 69%.
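A minimal C sketch of the schedulability check this bound implies; the function name and array-based interface are assumptions for illustration:

    #include <math.h>
    #include <stddef.h>

    /* Sufficient (not necessary) RMS schedulability test: the total
       utilization sum(Ti/ti) must stay within the Liu-Layland bound
       n*(2^(1/n) - 1), which tends to ln 2 (about 69%) as n grows. */
    int rms_utilization_ok(const double T[], const double t[], size_t n) {
        double u = 0.0;
        for (size_t i = 0; i < n; i++)
            u += T[i] / t[i];
        return u <= n * (pow(2.0, 1.0 / (double)n) - 1.0);
    }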

11. RMS CPU utilization, cont'd.
• RMS cannot guarantee use of 100% of the CPU, even with zero context-switch overhead.
• Must keep idle cycles available to handle the worst-case scenario.
• However, RMS guarantees that all processes will always meet their deadlines (under the RMA model, when the utilization bound is satisfied).

12. RMS implementation
• Efficient implementation:
  • scan processes;
  • choose highest-priority active process.
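A sketch of that scan, assuming a simple ready flag per process and an array already ordered by RMS priority (names are illustrative):

    #include <stddef.h>

    /* Scan processes in fixed priority order (index 0 = shortest period,
       highest priority) and run the first one that is ready. */
    int rms_pick(const int ready[], size_t n) {
        for (size_t i = 0; i < n; i++)
            if (ready[i])
                return (int)i;
        return -1;   /* nothing ready: idle */
    }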

13. Earliest-deadline-first scheduling
• EDF: dynamic priority scheduling scheme.
• Process closest to its deadline has highest priority.
• Requires recalculating process priorities at every timer interrupt.

14. EDF example
• [Figure: EDF schedule for processes P1 and P2.]

15. EDF analysis
• EDF can use 100% of the CPU.
• But when the CPU is overloaded, EDF may miss a deadline.

16. EDF implementation
• On each timer interrupt:
  • compute time to deadline;
  • choose process closest to deadline.
• Generally considered too expensive to use in practice.
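A sketch of the selection step, assuming each process carries a precomputed absolute deadline (names and data layout are illustrative):

    #include <stddef.h>

    /* On each timer interrupt, pick the ready process whose absolute
       deadline is closest; absolute deadlines compare directly. */
    int edf_pick(const int ready[], const long deadline[], size_t n) {
        int best = -1;
        for (size_t i = 0; i < n; i++) {
            if (!ready[i])
                continue;
            if (best < 0 || deadline[i] < deadline[best])
                best = (int)i;
        }
        return best;   /* -1 means nothing ready: idle */
    }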

17. Fixing scheduling problems
• What if your set of processes is unschedulable?
  • Change deadlines in requirements.
  • Reduce execution times of processes.
  • Get a faster CPU.

18. Priority inversion
• Priority inversion: a low-priority process keeps a high-priority process from running.
• Improper use of system resources can cause scheduling problems:
  • Low-priority process grabs I/O device.
  • High-priority process needs the I/O device, but can't get it until the low-priority process is done.
  • Can cause deadlock.

19. Solving priority inversion
• Give priorities to system resources.
• Have process inherit the priority of a resource that it requests.
  • Low-priority process inherits priority of device if higher.
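In POSIX terms, priority inheritance can be requested on the mutex guarding the shared resource; a minimal sketch, assuming the platform supports the POSIX priority-inheritance protocol option:

    #include <pthread.h>

    pthread_mutex_t io_lock;   /* guards the shared I/O device */

    void init_io_lock(void) {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        /* While a low-priority thread holds io_lock, it temporarily inherits
           the priority of any higher-priority thread blocked on the lock. */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        pthread_mutex_init(&io_lock, &attr);
        pthread_mutexattr_destroy(&attr);
    }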

20. Data dependencies
• Data dependencies allow us to improve utilization: they restrict the combinations of processes that can run simultaneously.
• [Figure: data dependency from P1 to P2; P1 and P2 can't run simultaneously.]

21. Context-switching time
• Non-zero context switch time can push limits of a tight schedule.
• Hard to calculate effects---depends on order of context switches.
• In practice, OS context switch overhead is small.

22. What about interrupts?
• Interrupts take time away from processes.
• Perform minimum work possible in the interrupt handler.
• [Figure: execution timeline in which the OS and an interrupt handler run in between processes P1, P2, and P3.]

23. Device processing structure
• Interrupt service routine (ISR) performs minimal I/O.
  • Get register values, put register values.
• Interrupt service process/thread performs most of device function.

24. POSIX scheduling policies
• SCHED_FIFO: fixed-priority scheduling, as used for RMS.
• SCHED_RR: round-robin.
  • Within a priority level, processes are time-sliced in round-robin fashion.
• SCHED_OTHER: undefined scheduling policy used to mix non-real-time and real-time processes.
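A minimal sketch of selecting SCHED_FIFO with sched_setscheduler(); the priority value and wrapper function are illustrative, and the call typically requires appropriate privileges:

    #include <sched.h>
    #include <stdio.h>

    /* Request fixed-priority FIFO scheduling for the calling process. */
    int use_sched_fifo(int prio) {
        struct sched_param param;
        param.sched_priority = prio;   /* e.g. higher for shorter periods (RMS) */
        if (sched_setscheduler(0, SCHED_FIFO, &param) < 0) {
            perror("sched_setscheduler");
            return -1;
        }
        return 0;
    }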

25. Interprocess communication
• OS provides interprocess communication mechanisms, which vary in:
  • efficiency;
  • communication power.

26. Signals
• A Unix mechanism for simple communication between processes.
• Analogous to an interrupt---forces execution of a process at a given location.
• But a signal is caused by one process with a function call.
• No data---can only pass type of signal.

27. POSIX signals
• Must declare a signal handler for the process using sigaction().
• Handler is called when signal is received.
• A signal can be sent with sigqueue():
  sigqueue(destpid,SIGRTMAX-1,sval)
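A sketch putting the two calls together, with an illustrative handler and payload: sigaction() installs the handler, and sigqueue() sends the real-time signal as on the slide.

    #include <signal.h>
    #include <string.h>

    /* Handler receives the extra data because SA_SIGINFO is set. */
    void rt_handler(int sig, siginfo_t *info, void *ctx) {
        int payload = info->si_value.sival_int;   /* value sent by sigqueue() */
        (void)sig; (void)ctx; (void)payload;
    }

    void install_and_send(pid_t destpid, int value) {
        struct sigaction act;
        memset(&act, 0, sizeof(act));
        act.sa_sigaction = rt_handler;
        act.sa_flags = SA_SIGINFO;        /* deliver the sigval payload */
        sigemptyset(&act.sa_mask);
        sigaction(SIGRTMAX-1, &act, NULL);

        union sigval sval;
        sval.sival_int = value;           /* illustrative payload */
        sigqueue(destpid, SIGRTMAX-1, sval);
    }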

28. POSIX signal types
• SIGABRT: abort
• SIGTERM: terminate process
• SIGFPE: floating point exception
• SIGILL: illegal instruction
• SIGKILL: unavoidable process termination
• SIGUSR1, SIGUSR2: user defined

29. Signals in UML
• More general than a Unix signal---may carry arbitrary data.
• [UML diagram: class someClass, with behavior sigbehavior(), has a «send» dependency on the «signal» aSig, which carries the attribute p : integer.]

30. POSIX shared memory
• POSIX supports counting semaphores with the _POSIX_SEMAPHORES option.
• A semaphore with N resources will not block until N processes hold the semaphore.
• Semaphores are given names, e.g.:
  • /sem1
• P() is sem_wait(), V() is sem_post().
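A minimal sketch of the named-semaphore calls above, using the slide's /sem1 name; the wrapper function is illustrative:

    #include <fcntl.h>       /* O_CREAT */
    #include <semaphore.h>

    void use_named_semaphore(void) {
        /* Create or open the named counting semaphore /sem1 with 1 resource. */
        sem_t *s = sem_open("/sem1", O_CREAT, 0666, 1);
        if (s == SEM_FAILED)
            return;
        sem_wait(s);    /* P(): acquire a resource, blocking if none remain */
        /* ... use the shared resource ... */
        sem_post(s);    /* V(): release the resource */
        sem_close(s);
    }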

31. POSIX message-based communication
• Unix pipe supports messages between processes.
• Parent process uses pipe() to create a pipe.
• Pipe is created before the child is created so that the pipe ID can be passed to the child.

32. POSIX pipe example

    /* create the pipe */
    if (pipe(pipe_ends) < 0) {
        perror("pipe");
        break;
    }
    /* create the process */
    childid = fork();
    if (childid == 0) {
        /* child reads from pipe_ends[0], the read end of the pipe;
           pass the descriptor number to the child program as a string
           (fd_arg is a small char buffer, declaration elided like the
           other variables in this fragment) */
        sprintf(fd_arg, "%d", pipe_ends[0]);
        childargs[0] = fd_arg;
        execv("mychild", childargs);
        perror("execv");   /* reached only if execv fails */
        exit(1);
    } else {
        /* parent writes to pipe_ends[1], the write end of the pipe */
        ...
    }

33. Evaluating performance
• May want to test:
  • context switch time assumptions;
  • scheduling policy.
• Can use OS simulator to exercise process set, trace system behavior.

34. Processes and caches
• Processes can cause additional caching problems.
• Even if individual processes are well-behaved, processes may interfere with each other.
• Worst-case execution time with bad cache behavior is usually much worse than execution time with good cache behavior.

35. Power optimization
• Power management: determining how system resources are scheduled/used to control power consumption.
• OS can manage for power just as it manages for time.
• OS reduces power by shutting down units.
• May have partial shutdown modes.

36. Power management and performance
• Power management and performance are often at odds.
• Entering power-down mode consumes:
  • energy,
  • time.
• Leaving power-down mode consumes:
  • energy,
  • time.

37. Simple power management policies
• Request-driven: power up once request is received. Adds delay to response.
• Predictive shutdown: try to predict how long you have before next request.
  • May start up in advance of request in anticipation of a new request.
  • If you predict wrong, you will incur additional delay while starting up.

38. Probabilistic shutdown
• Assume service requests are probabilistic.
• Optimize expected values:
  • power consumption;
  • response time.
• Simple probabilistic: shut down after time Ton, turn back on after waiting for Toff.
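A sketch of one reading of that timeout policy, treating Ton as an idle timeout; power_down() and power_up() are hypothetical platform hooks, not a real API:

    /* Hypothetical platform hooks; not a real API. */
    void power_down(void);
    void power_up(void);

    /* Called once per timer tick: shut the device down after it has been
       idle for t_on ticks; once off, power it back up after t_off ticks. */
    void shutdown_policy_tick(int request_pending, long t_on, long t_off) {
        static int powered = 1;
        static long idle_ticks = 0, off_ticks = 0;

        if (powered) {
            idle_ticks = request_pending ? 0 : idle_ticks + 1;
            if (idle_ticks >= t_on) {
                power_down();
                powered = 0;
                off_ticks = 0;
            }
        } else if (++off_ticks >= t_off) {
            power_up();
            powered = 1;
            idle_ticks = 0;
        }
    }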

39. Advanced Configuration and Power Interface
• ACPI: open standard for power management services.
• [Figure: ACPI software stack: applications and device drivers above the OS kernel and its power management module, with the ACPI BIOS below and the hardware platform at the bottom.]

40. ACPI global power states
• G3: mechanical off
• G2: soft off
• G1: sleeping state
  • S1: low wake-up latency with no loss of context
  • S2: low latency with loss of CPU/cache state
  • S3: low latency with loss of all state except memory
  • S4: lowest-power state with all devices off
• G0: working state
