
CPU Monitoring and Prediction
Urs Hengartner, Sonesh Surana, Yinglian Xie. Mentor: Dushyanth Narayanan


Presentation Transcript


  1. CPU Monitoring and Prediction
     Urs Hengartner, Sonesh Surana, Yinglian Xie
     Mentor: Dushyanth Narayanan

  2. Overview
     • Applications demand low latency even in the face of varying CPU availability
     • Multi-fidelity applications can adapt their fidelities to changes in CPU availability
     • Maintain a latency bound for CPU-bound processes facing varying CPU availability:
       • monitor CPU availability
       • predict future CPU availability
       • predict latency as a function of fidelity (Odyssey)
       • use this function to find the right application fidelity for a given latency constraint (Odyssey)
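The loop the slide describes (predict CPU supply, model latency as a function of fidelity, pick the highest fidelity that fits the bound) can be sketched as follows. This is only an illustration: the fidelity levels, the cost model in predict_latency(), and the 25% availability figure are assumptions, not Odyssey's actual interface.

```c
/* Sketch: choose the highest fidelity whose predicted latency fits the bound,
 * given a predicted CPU availability.  All numbers are illustrative. */
#include <stdio.h>

/* Assumed model: latency grows linearly with polygon count and inversely
 * with the fraction of the CPU the application is predicted to get. */
static double predict_latency(double polygons, double availability) {
    const double sec_per_polygon_at_full_cpu = 1e-5;   /* assumed constant */
    return sec_per_polygon_at_full_cpu * polygons / availability;
}

int main(void) {
    const double latency_bound = 2.0;                  /* seconds */
    const double predicted_availability = 0.25;        /* from the predictor */
    const double fidelities[] = { 200000, 100000, 50000, 25000 };  /* polygons */
    const int nfid = sizeof(fidelities) / sizeof(fidelities[0]);

    /* Walk from highest to lowest fidelity and stop at the first one that fits. */
    for (int i = 0; i < nfid; i++) {
        double lat = predict_latency(fidelities[i], predicted_availability);
        if (lat <= latency_bound) {
            printf("chose %.0f polygons (predicted latency %.2f s)\n",
                   fidelities[i], lat);
            return 0;
        }
    }
    printf("no fidelity meets the bound; falling back to the lowest\n");
    return 0;
}
```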

  3. Algorithm
     • Monitoring: periodic measurement of
       • the number of runnable processes
       • the CPU ticks consumed by the multi-fidelity application
     • Prediction of CPU availability from
       • n: the number of runnable processes (smoothed)
       • f: the fraction of CPU ticks consumed by the multi-fidelity application (smoothed)
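A minimal user-level sketch of the monitoring step on Linux: the run-queue length comes from the fourth field of /proc/loadavg and the application's own tick consumption from times(2), both smoothed exponentially. The prediction formula on the original slide is not recoverable from this transcript; the fair-share guess below (keep the fraction f already consumed, plus an equal 1/n share of the rest), the smoothing constant, and the helper names are assumptions.

```c
/* Sketch of periodic monitoring plus an assumed fair-share prediction. */
#include <stdio.h>
#include <sys/times.h>
#include <unistd.h>

/* Read the number of runnable processes from the "R/T" field of /proc/loadavg. */
static double runnable_processes(void) {
    FILE *fp = fopen("/proc/loadavg", "r");
    double l1, l5, l15;
    int runnable = 1, total = 1;
    if (fp) {
        if (fscanf(fp, "%lf %lf %lf %d/%d",
                   &l1, &l5, &l15, &runnable, &total) != 5)
            runnable = 1;              /* fall back to "only we are runnable" */
        fclose(fp);
    }
    return (double)runnable;
}

int main(void) {
    const double alpha = 0.5;              /* assumed smoothing constant */
    const long hz = sysconf(_SC_CLK_TCK);  /* scheduler ticks per second */
    double n_smooth = 1.0, f_smooth = 0.0;
    struct tms prev, cur;
    times(&prev);

    for (int i = 0; i < 5; i++) {
        sleep(1);                          /* one monitoring period */
        times(&cur);
        /* f: fraction of this period's ticks consumed by our own process */
        double used = (double)((cur.tms_utime + cur.tms_stime) -
                               (prev.tms_utime + prev.tms_stime));
        double f = used / (double)hz;
        double n = runnable_processes();
        prev = cur;

        /* exponential smoothing of n and f */
        n_smooth = alpha * n + (1.0 - alpha) * n_smooth;
        f_smooth = alpha * f + (1.0 - alpha) * f_smooth;

        /* assumed fair-share prediction: keep our current fraction,
         * split the remaining CPU evenly among the n runnable processes */
        double predicted = f_smooth + (1.0 - f_smooth) / n_smooth;
        printf("n=%.2f f=%.2f predicted availability=%.2f\n",
               n_smooth, f_smooth, predicted);
    }
    return 0;
}
```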

  4. Experiments
     • Predict CPU availability over the next second / the next ten seconds
     • Background load consisting of CPU-intensive or make processes
     • 50 experiments per data point

  5. Short-Term Predictions
     • Why are short-term predictions less accurate?
     • Artifact of the Linux scheduler
       • 100 ticks per second, per-process time quantum of 20 ticks
       • for two processes A and B: process A gets 60 ticks per second, process B gets 40 ticks per second
     • Impossible to make accurate short-term predictions unless
       • the scheduler is simulated at user level
       • the kernel provides more scheduling information
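The 60/40 split on the slide can be reproduced with a toy round-robin simulation: two always-runnable processes, 100 ticks per second, and 20-tick quanta give each 1-second window an uneven split that alternates between the processes, even though both average 50% over longer intervals. This is a sketch of the effect, not of the real Linux scheduler.

```c
/* Sketch: why 1-second measurements are noisy under round-robin scheduling
 * with the tick rate and quantum given on the slide. */
#include <stdio.h>

int main(void) {
    const int ticks_per_second = 100;
    const int quantum = 20;
    const int seconds = 4;
    int ticks[2] = { 0, 0 };   /* ticks consumed by A and B in the current second */
    int running = 0;           /* index of the process currently on the CPU */
    int left_in_quantum = quantum;

    for (int t = 0; t < seconds * ticks_per_second; t++) {
        ticks[running]++;
        if (--left_in_quantum == 0) {      /* quantum expired: switch process */
            running = 1 - running;
            left_in_quantum = quantum;
        }
        if ((t + 1) % ticks_per_second == 0) {
            /* prints alternating 60/40 and 40/60 splits per second */
            printf("second %d: A=%d ticks, B=%d ticks\n",
                   (t + 1) / ticks_per_second, ticks[0], ticks[1]);
            ticks[0] = ticks[1] = 0;
        }
    }
    return 0;
}
```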

  6. Make Background Load
     • Why are predictions for the make background load less accurate?
     • Artifact of the Linux scheduler
       • a make process gets added to the run queue but is not immediately scheduled
       • the run queue always contains make process(es)
       • however, the make process(es) always consume less than their share
       • this makes our formula under-predict CPU availability
     • Our formula is not powerful enough
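A small numeric illustration of the under-prediction, reusing the fair-share formula assumed in the earlier sketch: make processes inflate the smoothed run-queue length n, but being largely I/O-bound they consume far less than a 1/n share, so the actual availability is higher than predicted. All numbers are invented for illustration.

```c
/* Sketch of the under-prediction effect with an I/O-bound background load,
 * using the assumed fair-share formula from the earlier sketch. */
#include <stdio.h>

int main(void) {
    double f = 0.5;            /* smoothed fraction of ticks used by our application */
    double n = 3.0;            /* smoothed run-queue length: our app + two make processes */
    double make_usage = 0.10;  /* CPU the make processes actually consume */

    double predicted = f + (1.0 - f) / n;   /* fair-share prediction */
    double actual    = 1.0 - make_usage;    /* what a CPU-bound app really gets */

    printf("predicted availability: %.2f\n", predicted);  /* ~0.67 */
    printf("actual availability:    %.2f\n", actual);     /* ~0.90 */
    return 0;
}
```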

  7. Conclusions
     • CPU prediction is feasible
     • Predictions are difficult for
       • short prediction intervals
       • I/O-bound background processes
     • Future work
       • what kind of kernel-level support?
       • a more sophisticated prediction formula (e.g., techniques from machine learning)

  8. Demo
     • Interactive rendering application
     • Multiple levels of fidelity based on the number of rendered polygons
     • Background load of three CPU-intensive processes and a desired latency of two seconds
     • Scenario 1: prediction always returns 100% CPU availability
     • Scenario 2: prediction returns CPU availability based on our formula
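A back-of-the-envelope version of the two demo scenarios, assuming latency scales linearly with polygon count and inversely with CPU availability (the cost constant and polygon counts are invented): with three CPU-bound background processes the renderer gets roughly a quarter of the CPU under round-robin, so a fidelity chosen under the 100% assumption overshoots the 2-second bound by about 4x, while a prediction-driven choice scales the polygon count down until the bound is met.

```c
/* Sketch contrasting the two demo scenarios with an assumed linear cost model. */
#include <stdio.h>

static double latency(double polygons, double availability) {
    const double sec_per_polygon_at_full_cpu = 2e-5;   /* assumed cost model */
    return sec_per_polygon_at_full_cpu * polygons / availability;
}

int main(void) {
    const double bound = 2.0;        /* seconds, from the demo */
    const double real_avail = 0.25;  /* renderer + three CPU hogs, fair share */
    double polygons = 100000.0;      /* fidelity that meets the bound at 100% CPU */

    /* Scenario 1: the predictor always says 100%, so the high fidelity is kept,
     * but the actual latency blows through the bound (about 8 s here). */
    printf("scenario 1: %.0f polygons, actual latency %.1f s (bound %.1f s)\n",
           polygons, latency(polygons, real_avail), bound);

    /* Scenario 2: the predictor reports ~25%, so the fidelity is scaled down
     * in 10%% steps until the predicted latency fits the bound. */
    while (latency(polygons, real_avail) > bound)
        polygons *= 0.9;
    printf("scenario 2: %.0f polygons, predicted latency %.1f s\n",
           polygons, latency(polygons, real_avail));
    return 0;
}
```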
