
Balancing Interactive Performance and Budgeted Resources in Mobile Applications


Presentation Transcript


  1. Balancing Interactive Performance and Budgeted Resources in Mobile Applications Brett Higgins

  2. (Image slide; no text content)

  3. (Image slide; no text content)

  4. The Most Precious Computational Resource • “The most precious resource in a computer system is no longer its processor, memory, disk, or network, but rather human attention.” – Garlan et al. (12 years ago)

  5. User attention vs. energy/data • (Slide art: monetary cost and a 3 GB monthly data allotment)

  6. Balancing these tradeoffs is difficult! • Mobile applications must: • Select the right network for the right task • Understand complex interactions between: type of network technology; bandwidth/latency; timing/scheduling of network activity; performance; energy/data resource usage • Cope with uncertainty in predictions • Opportunity: system support

  7. Spending Principles • Spend resources effectively. • Live within your means. • Use it or lose it.

  8. Thesis Statement • By: • Providing abstractions to simplify multinetworking, • Tailoring network use to application needs, and • Spending resources to purchase reductions in delay, mobile systems can help applications significantly improve user-visible performance without exhausting limited energy & data resources.

  9. Roadmap • Introduction • Application-aware multinetworking (Intentional Networking) • Purchasing performance with limited resources (Informed Mobile Prefetching) • Coping with cloudy predictions (Meatballs) • Conclusion

  10. The Challenge • Diversity of networks; diversity of behavior (Email: fetch messages; YouTube: upload video) • Match traffic to available networks

  11. Current approaches: two extremes • All details hidden (result: mismatched traffic) • All details exposed (result: hard for applications)

  12. Solution: Intentional Networking • System measures available networks • Applications describe their traffic • System matches traffic to networks

  13. Abstraction: Multi-socket • Multi-socket: virtual connection (between client and server) • Analogue: task scheduler • Measures performance of each alternative • Encapsulates transient network failure

  14. Abstraction: Label • Qualitative description of network traffic • Interactivity: foreground vs. background • Analogue: task priority
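
To make the multi-socket and label abstractions concrete, here is a minimal sketch of how an application might use them. This is illustrative Python, not Intentional Networking's actual API; all names (MultiSocket, Label, send) are invented.

```python
# Hypothetical sketch of the multi-socket + label abstractions.
# Not the real Intentional Networking API; all names are invented.
from enum import Enum

class Label(Enum):
    FOREGROUND = 1   # small, latency-sensitive (user is waiting)
    BACKGROUND = 2   # large, throughput-oriented (user is not)

class MultiSocket:
    """Virtual connection spanning all available physical networks.
    The system, not the app, measures each network's bandwidth/latency
    and masks transient failures of any one network."""
    def __init__(self, host, port):
        self.host, self.port = host, port

    def send(self, data: bytes, label: Label):
        # The system matches labeled traffic to networks, e.g.
        # FOREGROUND -> low-latency link, BACKGROUND -> high-bandwidth link.
        pass

ms = MultiSocket("mail.example.com", 993)
ms.send(b"FETCH 1 BODY[]", Label.FOREGROUND)   # fetch one message now
ms.send(bytes(10_000_000), Label.BACKGROUND)   # bulk upload, schedule freely
```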

  15. Evaluation Results: Email • Vehicular network trace (Ypsilanti, MI) • (Chart callouts: 7x reduction in foreground fetch delay; 3% background throughput overhead)

  16. Intentional Networking: Summary • Multinetworking state of the art • All details hidden: far from optimal • All details exposed: far too complex • Solution: application-aware system support • Small amount of information from application • Significant reduction in user-visible delay • Only minimal background throughput overhead • Can build additional services atop this

  17. Roadmap • Introduction • Application-aware multinetworking (Intentional Networking) • Purchasing performance with limited resources (Informed Mobile Prefetching) • Coping with cloudy predictions (Meatballs) • Conclusion

  18. Mobile networks can be slow • Without prefetching: data fetching is slow; user: angry • With prefetching: fetch time hidden; user: happy

  19. Mobile prefetching is complex • Lots of challenges to overcome • How do I balance performance, energy, and cellular data? • Should I prefetch now or later? • Am I prefetching data that the user actually wants? • Does my prefetching interfere with interactive traffic? • How do I use cellular networks efficiently? • But the potential benefits are large!

  20. Who should deal with the complexity? • Users? • Developers?

  21. What apps end up doing • The Data Hog • The Data Miser

  22. Informed Mobile Prefetching • Prefetching as a system service • Handles complexity on behalf of users/apps • Apps specify what and how to prefetch • System decides when to prefetch • (Diagram: application calls prefetch(); IMP schedules it)
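
As a rough illustration of this split (app: what and how, system: when), here is a hypothetical IMP-style interface in Python; the method names and shapes are assumptions, not the system's real API.

```python
# Hypothetical sketch of an IMP-style prefetching service (invented names).

class IMP:
    def __init__(self):
        self._hints = {}   # hint id -> URL
        self._next = 0

    def prefetch(self, url):
        """App says WHAT to prefetch; IMP decides WHEN (now, later, or
        never) via cost-benefit analysis over time, energy, and data."""
        self._next += 1
        self._hints[self._next] = url
        return self._next

    def fetch(self, hint_id):
        """Demand fetch: returns cached bytes if the prefetch already ran,
        otherwise fetches on demand (the delay the user would notice)."""
        pass

    def used(self, hint_id):
        """Feedback: the hinted data was actually consumed."""

    def unused(self, hint_id):
        """Feedback: the hinted data never will be consumed."""

imp = IMP()
hid = imp.prefetch("imap://mail.example.com/msg/42")  # hint; returns at once
body = imp.fetch(hid)   # later, when the user opens the message
imp.used(hid)           # lets IMP learn this app's hint accuracy
```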

  23. Informed Mobile Prefetching • Tackles the challenges of mobile prefetching • Balances multiple resources via cost-benefit analysis • Estimates future cost, decides whether to defer • Tracks accuracy of prefetch hints • Keeps prefetching from interfering with interactive traffic • Considers batching prefetches on cellular networks

  24. Balancing multiple resources • (Diagram: a scale weighing benefit, performance, against costs: energy and cellular data)

  25. Balancing multiple resources • Performance (user time saved): future demand fetch time; network bandwidth/latency • Battery energy (spend or save): energy spent sending/receiving data; network bandwidth/latency; wireless radio power models (powertutor.org) • Cellular data (spend or save): monthly allotment; straightforward to track

  26. Weighing benefit and cost • IMP maintains exchange rates (Joules → seconds, bytes → seconds) • One value for each resource • Expresses importance of resource • Combine costs in common currency • Meaningful comparison to benefit • Adjust over time via feedback • Goal-directed adaptation
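
The exchange-rate mechanism reduces to a small computation: convert each resource cost into seconds, then compare against the time saved. A minimal sketch with invented rate values; the real system adjusts its rates over time toward its budget goals.

```python
# Sketch: exchange rates put energy (Joules) and cellular data (bytes)
# into the common currency of seconds. Rate values here are invented.

def worth_spending(time_saved_s, energy_j, data_bytes,
                   energy_rate=0.01,   # seconds per Joule
                   data_rate=1e-6):    # seconds per byte
    cost_s = energy_j * energy_rate + data_bytes * data_rate
    return time_saved_s > cost_s

# e.g., a prefetch that saves ~300 ms but costs 5 J and 50 kB of 3G data:
print(worth_spending(0.3, 5.0, 50_000))   # True at these rates

# Goal-directed adaptation (sketch): if a resource is being spent faster
# than its budget allows, raise its rate so it looks more expensive.
```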

  27. Users don’t always want what apps ask for • Some messages may not be read • Low-priority • Spam • Should consider the accuracy of hints • Don’t require the app to specify it • Just learn it through the API • App tells IMP when it uses data • (or decides not to use the data) • IMP tracks accuracy over time
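
One plausible way to learn hint accuracy from the used/unused feedback is a running hit ratio that discounts the expected benefit; this particular estimator is an assumption for illustration, not necessarily what IMP implements.

```python
class HintAccuracy:
    """Running estimate of how often this app's prefetch hints are consumed."""
    def __init__(self):
        self.used_count = 0
        self.total = 0

    def record(self, was_used):
        self.total += 1
        self.used_count += bool(was_used)

    def rate(self):
        return self.used_count / self.total if self.total else 1.0

acc = HintAccuracy()
for outcome in (True, True, False, True):   # from used()/unused() feedback
    acc.record(outcome)

# Only accurate hints yield benefit; the cost is paid either way.
expected_benefit_s = acc.rate() * 0.3       # 75% of a 300 ms saving
print(round(expected_benefit_s, 3))         # 0.225
```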

  28. Evaluation Results: Email • (Charts: 3G data usage (MB), average email fetch time (seconds), and energy usage (J), each with a budget marker; callouts: 2-8x, ~300 ms, 2x, “Optimal (100% hits)”) • IMP meets all resource goals • Less energy than all others (including WiFi-only!) • Only WiFi-only used less 3G data (but…)

  29. IMP: Summary • Mobile prefetching is a complex decision • Applications choose simple, suboptimal strategies • Powerful mechanism: purchase performance w/ resources • Prefetching as a system service • Application provides needed information • System manages tradeoffs (spending vs. performance) • Significant reduction in user-visible delay • Meets specified resource budgets • What about other performance/resource tradeoffs?

  30. Roadmap • Introduction • Application-aware multinetworking • Purchasing performance with limited resources • Coping with cloudy predictions: Motivation; Uncertainty-aware decision-making (Meatballs); Decision methods; Reevaluation from new information; Evaluation • Conclusion

  31. Mobile Apps & Prediction • Mobile apps rely on predictions of: • Networks (bandwidth, latency, availability) • Computation time • These predictions are often wrong • Networks are highly variable • Load changes quickly • Wrong prediction causes user-visible delay • But mobile apps treat them as oracles!

  32. Example: code offload • Likelihood of 100 sec response time: 0.1% (single request); 0.0001% (with a redundant request)

  33. Example: code offload • Elapsed time: 0 sec • Expected response time: 10.09 sec • Expected response time (redundant): 10.00009 sec

  34. Example: code offload • Elapsed time: 9 sec • Expected response time: 1.09 sec • Expected response time (redundant): 1.00909 sec

  35. Example: code offload • Elapsed time: 11 sec • Expected response time: 89 sec • Expected response time (redundant): 10.09 sec • Conditional expectation
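
The numbers on slides 33-35 follow from a simple two-point model (the response takes 10 s with probability 99.9%, else 100 s). A sketch reconstructing that arithmetic, where the "redundant" column means launching a second, independent request at the given moment:

```python
# Reconstruction of the code-offload arithmetic (assumed two-point model:
# a request completes in 10 s w.p. 0.999, or 100 s w.p. 0.001).
P_SLOW, FAST, SLOW = 0.001, 10.0, 100.0

def remaining_dist(elapsed):
    """Distribution of remaining time for a request still outstanding."""
    if elapsed < FAST:
        return [(1 - P_SLOW, FAST - elapsed), (P_SLOW, SLOW - elapsed)]
    # Past 10 s with no response: it must be the slow case, so the
    # conditional expectation jumps (e.g., 89 s remaining at t = 11).
    return [(1.0, SLOW - elapsed)]

def expected_remaining(elapsed):
    return sum(p * r for p, r in remaining_dist(elapsed))

def expected_remaining_redundant(elapsed):
    """Launch a second, independent request now; wait for the first of the
    two to respond: E[min(R1, R2)] over the joint distribution."""
    fresh = [(1 - P_SLOW, FAST), (P_SLOW, SLOW)]
    return sum(p1 * p2 * min(r1, r2)
               for p1, r1 in remaining_dist(elapsed)
               for p2, r2 in fresh)

for t in (0, 9, 11):
    print(t, expected_remaining(t), round(expected_remaining_redundant(t), 5))
# t=0: 10.09 vs 10.00009; t=9: 1.09 vs 1.00909; t=11: 89.0 vs ~10.08
```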

  36. Spending for a good cause • It’s okay to spend extra resources! • …if we think it will benefit the user • Spend resources to cope with uncertainty • …via redundant operation • Quantify uncertainty, use it to make decisions • Benefit (time saved) vs. cost (energy + data)

  37. Meatballs • A library for uncertainty-aware decision-making • Application specifies its strategies • Different means of accomplishing a single task • Functions to estimate time, energy, cellular data usage • Application provides predictions, measurements • Allows library to capture error & quantify uncertainty • Library helps application choose the best strategy • Hides complexity of decision mechanism • Balances cost & benefit of redundancy
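
A sketch of the general shape such a library could take: the app registers strategies with estimator callbacks, and the library picks the cheapest in common-currency terms. The registration style and every name below are assumptions for illustration, not Meatballs' actual interface.

```python
# Hypothetical sketch of an uncertainty-aware strategy chooser (invented API).

class Strategy:
    def __init__(self, name, time_fn, energy_fn, data_fn):
        self.name = name
        # App-supplied estimators: given current predictions, return the
        # expected time (s), energy (J), and cellular data (bytes).
        self.time_fn, self.energy_fn, self.data_fn = time_fn, energy_fn, data_fn

def choose(strategies, predictions, energy_rate=0.01, data_rate=1e-6):
    """Pick the strategy with least expected cost in seconds. (The library
    described on the slide also weighs redundant combinations and the
    measured error of the predictions; omitted here.)"""
    def cost(s):
        return (s.time_fn(predictions)
                + energy_rate * s.energy_fn(predictions)
                + data_rate * s.data_fn(predictions))
    return min(strategies, key=cost)

wifi = Strategy("wifi", lambda p: 8e6 / p["wifi_bw"], lambda p: 3.0, lambda p: 0)
cell = Strategy("cell", lambda p: 8e6 / p["cell_bw"], lambda p: 5.0, lambda p: 1e6)
best = choose([wifi, cell], {"wifi_bw": 2e6, "cell_bw": 4e6})
print(best.name)   # "cell" under these made-up numbers
```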

  38. Roadmap • Introduction • Application-aware multinetworking • Purchasing performance with limited resources • Coping with cloudy predictions: Motivation; Uncertainty-aware decision-making (Meatballs); Decision methods; Reevaluation from new information; Evaluation • Conclusion

  39. Deciding to operate redundantly • Benefit of redundancy: time savings • Cost of redundancy: additional resources • Benefit and cost are expectations • Consider predictions as distributions, not spot values • Approaches: • Empirical error tracking • Error bounds • Bayesian estimation

  40. Empirical error tracking • Compute error upon new measurement: error = observed / predicted • Weighted sum over joint error distribution • For multiple networks: • Time: min across all networks • Cost: sum across all networks • (Charts: error histograms ε(BP1), ε(BP2) over ratios 0.5–1.4)
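
A sketch of empirical error tracking as the slide describes it: keep the observed/predicted ratios per predictor, scale a new prediction by each stored ratio, and take the expectation of min (time) over the joint distribution. Treating the two networks' errors as independent is an assumption here.

```python
import itertools

class ErrorTracker:
    """Histogram of observed/predicted ratios for one bandwidth predictor."""
    def __init__(self):
        self.ratios = []

    def observe(self, observed, predicted):
        self.ratios.append(observed / predicted)

    def adjusted(self, prediction):
        """Equally weighted distribution of plausible true bandwidths."""
        n = len(self.ratios)
        return [(1.0 / n, prediction * e) for e in self.ratios]

def expected_min_time(size_bits, pred1, trk1, pred2, trk2):
    """E[min(T1, T2)]: a redundant send finishes when the faster network
    does. (A resource cost would instead SUM across networks, per the slide.)"""
    return sum(p1 * p2 * min(size_bits / bw1, size_bits / bw2)
               for (p1, bw1), (p2, bw2)
               in itertools.product(trk1.adjusted(pred1), trk2.adjusted(pred2)))

trk1, trk2 = ErrorTracker(), ErrorTracker()
for obs, pred in [(0.9e6, 1e6), (1.2e6, 1e6)]:
    trk1.observe(obs, pred)
    trk2.observe(obs, pred)
print(expected_min_time(8e6, 1e6, trk1, 2e6, trk2))   # seconds, in expectation
```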

  41. Error bounds • Range + p(next value is in range) • Student’s-t prediction interval • Calculate bound on time savings of redundancy • (Charts: network bandwidth (Mbps) with intervals around BP1, BP2; time to send 10 Mb (seconds) with intervals around T1, T2; predictor says: use network 2; shaded: max time savings from redundancy)

  42. Error bounds • Calculate bound on net gain of redundancy: max(benefit) – min(cost) = max(net gain) • Use redundancy if max(net gain) > 0 • (Charts add energy to send 10 Mb (J): E1, E2, Eboth; min energy w/ redundancy)
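
A sketch of the error-bounds decision using a Student's-t prediction interval (via scipy.stats.t). The rule "use redundancy if max(net gain) > 0" is from the slide; the particular way the bounds are combined below is one plausible reading, not the dissertation's exact formulation.

```python
import math
from scipy import stats

def prediction_interval(samples, confidence=0.95):
    """Student's-t prediction interval for the next observation."""
    n = len(samples)
    mean = sum(samples) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    half = stats.t.ppf((1 + confidence) / 2, n - 1) * sd * math.sqrt(1 + 1 / n)
    return mean - half, mean + half

def use_redundancy(times1, times2, extra_energy_j, energy_rate=0.01):
    """Send redundantly iff max(net gain) = max(benefit) - min(cost) > 0."""
    lo1, hi1 = prediction_interval(times1)   # send time on network 1 (s)
    lo2, hi2 = prediction_interval(times2)   # send time on network 2 (s)
    # The predictor alone would pick the network with the lower mean time;
    # the most redundancy could save is that choice's upper bound minus the
    # lower bound of whichever network might turn out faster.
    pick_hi = hi1 if sum(times1) / len(times1) <= sum(times2) / len(times2) else hi2
    max_benefit = max(0.0, pick_hi - min(lo1, lo2))
    min_cost = extra_energy_j * energy_rate   # extra Joules, in seconds
    return max_benefit - min_cost > 0

print(use_redundancy([4.1, 3.8, 4.5, 4.0], [3.0, 5.2, 2.4, 6.1], 2.0))
```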

  43. Bayesian Estimation • Basic idea: • Given a prior belief about the world, • and some new evidence, • update our beliefs to account for the evidence, • AKA obtaining the posterior distribution, • using the likelihood of the evidence • Via Bayes’ Theorem: posterior = (likelihood × prior) / p(evidence), where p(evidence) is a normalization factor ensuring the posterior sums to 1

  44. Bayesian Estimation • Example: “will it rain tomorrow?” • Prior: historical rain frequency • Evidence: weather forecast (simple yes/no) • Posterior: believed rain probability given forecast • Likelihood: • When it will rain, how often has the forecast agreed? • When it won’t rain, how often has the forecast agreed? • Via Bayes’ Theorem: posterior = (likelihood × prior) / p(evidence)
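
The rain example as arithmetic, with invented numbers for the prior and the likelihoods:

```python
# Bayes update for "will it rain tomorrow?" (all numbers invented).
prior_rain = 0.3          # historical rain frequency
p_yes_given_rain = 0.8    # forecast said rain on days it did rain
p_yes_given_dry = 0.1     # forecast said rain on days it stayed dry

# Evidence: today's forecast says rain.
p_evidence = p_yes_given_rain * prior_rain + p_yes_given_dry * (1 - prior_rain)
posterior_rain = p_yes_given_rain * prior_rain / p_evidence
print(round(posterior_rain, 3))   # 0.774: the forecast raises belief from 0.3
```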

  45. Bayesian Estimation • Applied to Intentional Networking: • Prior: bandwidth measurements • Evidence: bandwidth prediction + implied decision • Posterior: new belief about bandwidths • Likelihood: • When network 1 wins, how often has the prediction agreed? • When network 2 wins, how often has the prediction agreed? • Via Bayes’ Theorem: posterior = (likelihood × prior) / p(evidence)

  46. Decision methods: summary • Empirical error tracking • Captures error distribution accurately • Computationally intensive • Error bounds • Computationally cheap • Prone to overestimating uncertainty • Bayesian • Computationally cheap(er than brute-force) • Appears more reliant on history

  47. Reevaluation: conditional distributions • (Figure: decision, one server vs. two servers, as a function of elapsed time, 0 to 10 …)

  48. Evaluation: methodology • Network trace replay, energy & data measurement • Same as IMP • Metric: weighted cost function • time + c_energy × energy + c_data × data
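
For completeness, the evaluation metric as a function; the coefficient values c_energy and c_data are whatever the evaluation assigned, so they are left as parameters here:

```python
def weighted_cost(time_s, energy_j, data_bytes, c_energy, c_data):
    """Evaluation metric from the slide: time + c_energy*energy + c_data*data."""
    return time_s + c_energy * energy_j + c_data * data_bytes
```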

  49. Evaluation: IntNW, walking trace • (Chart: weighted cost, normalized; simple strategies vs. Meatballs; callouts: 2x, 24%) • Low-resource strategies improve • Error bounds leans towards redundancy • Meatballs matches the best strategy

  50. Evaluation: PocketSphinx, server load • (Chart: weighted cost, normalized; simple strategies vs. Meatballs; callout: 23%) • Error bounds leans towards redundancy • Meatballs matches the best strategy
