CPU Monitoring and Prediction
Urs Hengartner, Sonesh Surana, Yinglian Xie
Mentor: Dushyanth Narayanan
Overview
• Applications demand low latency even when CPU availability varies
• Multi-fidelity applications can adapt their fidelities to changes in CPU availability
• Goal: maintain a latency bound for CPU-bound processes facing varying CPU availability
• monitor CPU availability
• predict future CPU availability
• predict latency as a function of fidelity (Odyssey)
• use this function to find the right application fidelity for a given latency constraint (Odyssey)
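The last step above (picking a fidelity that meets a latency constraint) can be sketched as follows. This is an illustration only: the function name `choose_fidelity`, the per-fidelity work estimates, and the simple latency model (work divided by predicted CPU supply) are assumptions, not Odyssey's actual interface.

```python
def choose_fidelity(work_per_fidelity, predicted_cpu, latency_bound):
    """Pick the highest fidelity whose predicted latency meets the bound.

    work_per_fidelity: dict mapping fidelity level -> estimated CPU work (ticks)
    predicted_cpu: predicted CPU availability for the app (ticks per second)
    latency_bound: allowed latency in seconds
    Returns the best fidelity level, or None if even the lowest one is too slow.
    """
    best = None
    for fidelity, work in sorted(work_per_fidelity.items()):
        # Simple latency model: latency = work / predicted CPU supply.
        if work / predicted_cpu <= latency_bound:
            best = fidelity
    return best
```

For example, with work estimates {1: 50, 2: 120, 3: 300} ticks, 100 predicted ticks per second, and a 2-second bound, level 2 is the highest fidelity that fits.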
Algorithm
• Monitoring: periodic measurement of
• number of runnable processes
• CPU ticks consumed by the multi-fidelity application
• Prediction of CPU availability from:
• n: number of runnable processes (smoothed)
• f: fraction of CPU ticks consumed by the multi-fidelity application (smoothed)
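The slide's prediction formula itself did not survive extraction, so the sketch below only illustrates the monitored quantities n and f. The exponential smoothing weight and the fair-share predictor (the app gets at least 1/n of the CPU, or its recently observed fraction f if that is higher) are assumptions, not the authors' actual formula.

```python
ALPHA = 0.5  # exponential smoothing weight (assumed; the slides give no value)

class CpuPredictor:
    def __init__(self):
        self.n = 1.0  # smoothed number of runnable processes
        self.f = 0.0  # smoothed fraction of ticks used by the app

    def update(self, runnable, app_ticks, total_ticks):
        # Periodic monitoring step: smooth both measured quantities.
        self.n = ALPHA * runnable + (1 - ALPHA) * self.n
        self.f = ALPHA * (app_ticks / total_ticks) + (1 - ALPHA) * self.f

    def predict(self):
        # Illustrative fair-share predictor: the app should get about 1/n
        # of the CPU, but at least what it has recently been consuming.
        return max(1.0 / self.n, self.f)
```

For instance, after one update with 2 runnable processes and the app consuming 50 of 100 ticks, the smoothed values are n = 1.5 and f = 0.25, giving a predicted share of 1/1.5 ≈ 0.67.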
Experiments
• Predict CPU availability over the next second / next ten seconds
• Background load consisting of CPU-intensive / make processes
• 50 experiments per data point
Short-Term Predictions
• Why are short-term predictions less accurate?
• Artifact of the Linux scheduler
• 100 ticks per second, per-process time quantum of 20 ticks
• for two processes A and B:
• process A gets 60 ticks per second
• process B gets 40 ticks per second
• Accurate short-term prediction is impossible unless
• the scheduler is simulated at user level, or
• the kernel provides more scheduling information
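The 60/40 split above follows directly from round-robin scheduling with fixed quanta: in a 100-tick second, the five 20-tick quanta cannot divide evenly between two processes. A minimal simulation of that effect (the function name and round-robin order are illustrative, not a model of the real Linux scheduler):

```python
def round_robin_ticks(quantum=20, total=100, procs=("A", "B")):
    """Count ticks each process receives under round-robin with fixed
    quanta over a one-second window of `total` ticks."""
    ticks = {p: 0 for p in procs}
    i = 0
    elapsed = 0
    while elapsed < total:
        run = min(quantum, total - elapsed)  # last quantum may be clipped
        ticks[procs[i % len(procs)]] += run
        elapsed += run
        i += 1
    return ticks
```

With the defaults, process A receives quanta 1, 3, and 5 (60 ticks) while B receives quanta 2 and 4 (40 ticks), so any predictor that assumes an even 50/50 split is off by 10% over a one-second interval.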
Make Background Load
• Why are predictions for the make background load less accurate?
• Artifact of the Linux scheduler
• a make process is added to the run queue but is not immediately scheduled
• the run queue therefore always contains make process(es)
• however, the make process(es) always consume less than their share
• this makes our formula under-predict CPU availability
• Our formula is not powerful enough
Conclusions
• CPU prediction is feasible
• Predictions are difficult for
• short prediction intervals
• I/O-bound background processes
• Future work
• what kind of kernel-level support?
• more sophisticated prediction formula (e.g., techniques from machine learning)
Demo
• Interactive rendering application
• Multiple fidelity levels based on the number of rendered polygons
• Background load of three CPU-intensive processes; desired latency of two seconds
• Scenario 1: prediction always returns 100% CPU availability
• Scenario 2: prediction returns CPU availability based on our formula