
Design Considerations for Network Processor Operating Systems


Presentation Transcript


  1. Design Considerations for Network Processor Operating Systems Tilman Wolf (1), Ning Weng (2) and Chia-Hui Tai (1); (1) University of Massachusetts Amherst, (2) Southern Illinois University Carbondale ANCS 2005

  2. Network Processor Systems • System outline: • Network Processor Operating System (NPOS) • Manages multicore embedded system • Considers workload requirements and network traffic ANCS 2005

  3. NPOS Characteristics • Network processing is a highly dynamic process • Many different network services and protocols • Processing requirements depend on network traffic • New algorithms for existing applications, e.g., flow classification • Managing network processors is difficult • Multiple embedded processor cores • Limited memory and processing resources • Tight interaction between components • Processing elements cannot run a complex OS • NPOS requirements: • Lightweight • Considers multiprocessor nature • Adaptive to changes in workload ANCS 2005

  4. Comparison • Major differences to workstation/server OS • Separation between control and data path • Limited/no user interaction • Highly regular and “simple” applications • Processing dominates resource management • No separation of user space and kernel space • Differences to other NP runtime environments • Others: NEPAL, Teja, Shangri-La • Multiple packet processing applications • Run-time remapping • Considers parallelism within application • Not limited to certain hardware ANCS 2005

  5. Outline • Introduction • NPOS architecture • Our approach • Design parameters • Application workload • Partitioning and mapping • Traffic characterization • Variation in processing demand • Results and tradeoffs • NPOS parameters • Quantitative tradeoffs • Example NPOS scenarios ANCS 2005

  6. Architecture of NPOS • Applications • Multiprocessor requires application partitioning • Mapping during runtime • Network traffic • Determines workload • Analysis of traffic required during runtime • Dynamic aspects • Traffic determines application mix • Complete or partial adaptation necessary ANCS 2005

  7. Design Question • How finely should applications be partitioned? • How good does the mapping approximation need to be? • Should we spend more time on better mapping or should we remap more frequently? • How often should the NPOS remap? • How badly does the system perform if we predict the workload incorrectly? • Should we remap completely or should we remap partially? ANCS 2005

  8. NPOS Parameters • Application partitioning • Partitioning granularity • Traffic characterization • Sample size • Batch size • Single parameter: traffic variation • Application mapping • Mapping effort • Mapping quality • Workload adaptation • Frequency • Complete or partial reallocation ANCS 2005

  9. Application Partitioning • Grouping of instruction blocks • Dependencies between blocks • Represented by directed acyclic graph • Annotation gives information on processing and dependencies • Annotated Directed Acyclic Graph (ADAG) • ADAG generation • Automatic derivation from runtime trace • Balance of node size important • NP-complete problem • Heuristic approximation • Presented at NP-3 • Choice of granularity in NPOS • Monolithic • Very fine-grained ADAG • Balanced ADAG ANCS 2005
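
The slides contain no code, so the sketch below is only an illustration of the idea: an ADAG is a set of annotated instruction-block nodes with dependency edges, and a partitioning pass groups blocks into nodes of comparable weight. The node structure, the `target_weight` parameter, and the greedy grouping rule are illustrative assumptions; the actual balancing heuristic is the one presented in the authors' NP-3 paper and is not reproduced here.

```python
# Minimal ADAG sketch (assumptions, not the paper's heuristic): nodes carry
# an instruction-count annotation and dependency edges; a greedy pass merges
# blocks until each group reaches roughly `target_weight` instructions.
from dataclasses import dataclass, field

@dataclass
class AdagNode:
    node_id: int
    instructions: int                          # annotated processing cost
    deps: set = field(default_factory=set)     # ids of predecessor blocks

def coarsen(blocks, target_weight):
    """Merge a topologically ordered block list into groups whose total
    instruction count stays near target_weight (balanced node sizes)."""
    groups, current, weight = [], [], 0
    for block in blocks:                       # assumes topological order
        current.append(block)
        weight += block.instructions
        if weight >= target_weight:
            groups.append(current)
            current, weight = [], 0
    if current:
        groups.append(current)
    return groups

# Example: three granularities of the same runtime trace
blocks = [AdagNode(i, 40 + (i % 3) * 10) for i in range(20)]
fine     = coarsen(blocks, 1)       # very fine-grained ADAG
balanced = coarsen(blocks, 200)     # balanced ADAG
monolith = coarsen(blocks, 10**9)   # monolithic (single node)
print(len(fine), len(balanced), len(monolith))
```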

  10. Workload Mapping • Process of placing ADAGs on network processor • Baseline system: • Analytic performance model: not discussed here ANCS 2005

  11. Mapping Algorithm • Mapping problem is NP-complete • Need heuristic approximation • Key assumption: • Quality of mapping depends on mapping effort • Randomized mapping • Randomly place ADAG • Evaluate performance • Keep best solution and retry • Increasing mapping effort yields incrementally better results ANCS 2005
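
A minimal sketch of the randomized-mapping loop described on this slide: try random placements of ADAG nodes onto processing elements, score each candidate, and keep the best. The scoring function here is a simple load-balance proxy standing in for the analytic performance model of slide 10; node weights, PE count, and the `effort` parameter are illustrative assumptions.

```python
# Sketch only: random placement + keep-the-best, with a toy objective.
import random

def random_placement(node_weights, num_pes):
    """Assign each ADAG node to a random processing element (PE)."""
    return {node_id: random.randrange(num_pes) for node_id in node_weights}

def score(placement, node_weights, num_pes):
    """Toy objective: negate the load of the most loaded PE (lower max
    load is better).  Stand-in for the paper's analytic model."""
    load = [0] * num_pes
    for node_id, pe in placement.items():
        load[pe] += node_weights[node_id]
    return -max(load)

def randomized_mapping(node_weights, num_pes, effort):
    """Keep the best of `effort` random placements; more mapping effort
    tends to yield incrementally better results, as the slide notes."""
    best, best_score = None, float("-inf")
    for _ in range(effort):
        cand = random_placement(node_weights, num_pes)
        s = score(cand, node_weights, num_pes)
        if s > best_score:
            best, best_score = cand, s
    return best

# Example: 12 ADAG nodes of varying weight mapped onto 4 PEs
nodes = {i: 50 + 10 * (i % 4) for i in range(12)}
mapping = randomized_mapping(nodes, num_pes=4, effort=500)
```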

  12. Application Partitioning Granularity • What level of granularity is best? • Monolithic (one single node): does not exploit parallelism • Very fine-grained: requires excessive mapping effort ANCS 2005

  13. Traffic Characterization • We can find a configuration for one particular workload • Workload depends on traffic, which changes dynamically • Need to adapt to traffic • Cannot adapt for every packet • Need to sample traffic and find a configuration for a longer period • Traffic models for NPOS: • Static: cannot adapt, generally not suitable • Batch: batch of packets buffered, perfect prediction, long delay • Predictive batch: sampling of traffic, prediction for entire batch • Takes advantage of temporal locality of network traffic • Key NPOS parameters: • Batch size: number of packets processed using one workload allocation • Sample size: number of packets used to predict batch workload • Impact metric: traffic variation ANCS 2005
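
The predictive-batch model can be summarized as: observe the first l packets of each batch, derive an application mix, compute a mapping for that mix, and process the remaining packets of the batch with it. The sketch below is a hypothetical driver loop; `compute_mapping`, `process`, and the `packet.application` attribute are placeholder names, not interfaces from the paper.

```python
# Sketch of the "predictive batch" traffic model under stated assumptions.
from collections import Counter

def predictive_batch(packet_stream, sample_size, batch_size,
                     compute_mapping, process):
    """Buffer only a small sample at the start of each batch, predict the
    batch's application mix from it (temporal locality), then process the
    rest of the batch with the resulting workload allocation."""
    sample, mapping, processed = [], None, 0
    for packet in packet_stream:
        if len(sample) < sample_size:          # start of a new batch
            sample.append(packet)
            if len(sample) == sample_size:
                mix = Counter(p.application for p in sample)
                mapping = compute_mapping(mix) # e.g., randomized mapping
                for p in sample:
                    process(p, mapping)
                processed = sample_size
            continue
        process(packet, mapping)
        processed += 1
        if processed == batch_size:            # batch done: resample
            sample, mapping, processed = [], None, 0
```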

  14. Traffic Variation • Measure for traffic variation v • Metric for how different traffic is from what we expected • ei,j(a): estimated number of packets for application a • pi,j(a): actual number of packets for application a • Workload allocated according to sample of size l • What fraction of packets in a batch of size b cannot be processed? • Ideal: v = 0, all packets match the workload allocation • Figure: • 4,235,403 packets, 175 categories of applications • Sample size l=100, batch size b=10,000 ANCS 2005
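
The transcript preserves the symbol definitions but not the formula itself. One plausible way to write the metric consistent with those definitions, counting the fraction of batch packets whose application was under-predicted, is shown below; this is a hedged reconstruction under stated assumptions, not necessarily the paper's exact equation.

```latex
% Hedged reconstruction: e_{i,j}(a) is the number of packets of application a
% predicted for batch (i,j) from the sample of size l, p_{i,j}(a) is the
% actual count in the batch of size b; v = 0 means a perfect match.
v_{i,j} \;=\; \frac{1}{b} \sum_{a} \max\bigl(0,\; p_{i,j}(a) - e_{i,j}(a)\bigr)
```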

  15. Sample and Batch Size • Bigger sample reduces v • Better prediction • Bigger batch reduces v • Only if sample also increases • Smoothes over variation • NPOS considerations • Limitations on size of sample • Need to buffer packets • Need time to compute mapping • Limitations on batch size • Larger batches predict further ahead • More variation with larger batches • Need to remap during runtime • Figure: sample size l = 100 ANCS 2005

  16. Optimal Mapping Frequency • How often should we run the mapping process? • Need to find “sweet spot” • Too frequently: • Low mapping quality • Too infrequently: • Traffic changes during batch • Traffic variation reduces performance • Depends on batch size • For our setup: • Optimal mapping frequency around every 20-100 packets • Depends on relative speed of processor that performs mapping ANCS 2005

  17. Partial Mapping • Traffic changes workload incrementally • Can we adapt by partial mapping? • Remove unnecessary ADAG • Map new ADAG onto existing mapping • NPOS consideration: • What is the long-term performance impact? • How much can we change? • Repeated partial mapping degrades performance • Stabilizes at some suboptimal state • Mapping granularity makes minor difference • Complete mapping is occasionally necessary for peak performance ANCS 2005
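
A brief sketch of the partial-remapping idea, in the spirit of the randomized mapping sketch above: the nodes of departing applications are dropped from the current placement and only newly arriving nodes are placed (best of a few random tries), while everything else stays where it is. Function and variable names are illustrative assumptions.

```python
# Sketch of partial remapping under stated assumptions.
import random

def partial_remap(placement, node_weights, removed_nodes, new_nodes,
                  num_pes, effort):
    """Drop nodes of departed ADAGs and place only the new nodes on top of
    the existing placement.  Repeated partial remaps can drift away from
    full-remap quality, so an occasional complete remap is still needed."""
    base = {n: pe for n, pe in placement.items() if n not in removed_nodes}
    best, best_max_load = None, float("inf")
    for _ in range(effort):
        cand = dict(base)
        for n in new_nodes:
            cand[n] = random.randrange(num_pes)
        load = [0] * num_pes
        for n, pe in cand.items():
            load[pe] += node_weights[n]
        if max(load) < best_max_load:
            best, best_max_load = cand, max(load)
    return best
```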

  18. Design Scenarios • Tradeoffs between different NPOS scenarios • Scenario I: static configuration • Simple system • No flexibility at runtime • Performance degradation under traffic variations • Scenario II: predetermined configuration • Offline mapping of multiple static workloads • Limited adaptability during runtime • High-quality mapping results • Scenario III: fully dynamic configuration • Complete adaptability to any workload during runtime • Limited mapping quality • Lower overprovisioning overhead • Results of our work provide quantitative tradeoffs ANCS 2005

  19. Conclusion • Network Processor Operating System • Application workload • Traffic characterization • Design parameters • Quantitative tradeoffs • Next steps • Integrate memory management • Consider different traffic prediction algorithms • Develop prototype system on IXP platform ANCS 2005

  20. References
[1] Memik, G., and Mangione-Smith, W. H. NEPAL: A framework for efficiently structuring applications for network processors. In Proc. of Second Network Processor Workshop (NP-2), in conjunction with the Ninth International Symposium on High-Performance Computer Architecture (HPCA), Feb. 2003.
[2] Teja Technologies. TejaNP Datasheet, 2003. http://www.teja.com.
[3] Kokku, R., Riché, T., Kunze, A., Mudigonda, J., Jason, J., and Vin, H. A case for run-time adaptation in packet processing systems. In Proc. of the 2nd Workshop on Hot Topics in Networks, Nov. 2003.
[4] Ramaswamy, R., Weng, N., and Wolf, T. Application analysis and resource mapping for heterogeneous network processor architectures. In Proc. of Third Network Processor Workshop (NP-3), Feb. 2004.
[5] Weng, N., and Wolf, T. Pipelining vs. multiprocessors - choosing the right network processor system topology. In Proc. of Advanced Networking and Communications Hardware Workshop, June 2004.
[6] Weng, N., and Wolf, T. Profiling and mapping of parallel workloads on network processors. In Proc. of the 20th Annual ACM Symposium on Applied Computing, Mar. 2005.
ANCS 2005

  21. Questions? ANCS 2005
