High Performance Computing at Mercury Marine


Presentation Transcript


  1. High Performance Computing at Mercury Marine Arden Anderson Mercury Marine Product Development and Engineering

  2. Outline • About Mercury Marine • Engineering simulation capabilities • Progression of computing systems • HPC system cost and justification • Summary

  3. Mercury Marine • Began as the Kiekhaefer Corp., founded by E. Carl Kiekhaefer in Cedarburg, WI in 1939 • Acquired by Brunswick Corporation in 1961, a leader in active recreation: marine engines, boating, bowling, billiards, and fitness • Today the USA’s only outboard manufacturer • Employs 4,200 people worldwide • Fond du Lac, WI campus includes corporate offices, the Technology Center and R&D offices, and outboard manufacturing (casting, machining, and assembly through distribution) (Image: Mercury’s 1st patent)

  4. The Most Comprehensive Product Offering in Recreational Marine • Outboard engines (2.5 hp to 350 hp) • Sterndrive engines (135 hp to 1,250 hp) • All new or updated in the last 5 years • All updated to the new emissions standard in the last year • Props / rigging / P&A • Land ‘N’ Sea / Attwood • Diversified, quality products, connected to the parent corporation

  5. Outline • About Mercury Marine • Engineering simulation capabilities • Progression of computing systems • HPC system cost and justification • Summary

  6. Poll Question • How many compute cores do you use for your largest jobs? • Less than 4 • 4-16 • 17-64 • More than 64

  7. Standard FEA • Fatigue & hardware correlation • Non-linear gaskets • System assemblies with contact • Sub-modeling

  8. Explicit FEA • System level submerged object impact • Method development was presented at the 2008 Abaqus Users Conference

  9. CFD • Transient internal flow • External flow • Two-phase flow • Cavitation onset • Vessel drag, heave, and pitch • Flow distribution correlated to hardware • Moving-mesh propeller

  10. Heat Transfer • Enclosure air flow & component temperatures • Conjugate heat transfer for temperature distribution & thermal fatigue

  11. Overview of Mercury Marine Design Analysis Group • Experience: aerospace; automotive and off-highway; composites; dynamic impact and weapons; gas and diesel engine; hybrid; marine • Simulation methods: structural analysis; implicit finite element; explicit finite element; dynamic analysis; fluid dynamics; heat transfer; engine performance • Analyst workstations: pre- and post-processing; dual Xeon 5160 (4 cores), 3.0 GHz; up to 16 GB RAM; 64-bit Windows XP • HPC system: FEA and CFD solvers; 80 cores (10 nodes x 8 cores/node); up to 40 GB RAM per node; InfiniBand switch; Windows HPC Server 2008

  12. Poll Question • How many compute cores do you use for your largest jobs? • Less than 4 • 4-16 • 17-64 • More than 64 (Placeholder slide for returning to the poll question responses.)

  13. Outline • About Mercury Marine • Engineering simulation capabilities • Progression of computing systems • HPC system cost and justification • Summary

  14. Evolution of Computing Systems at Mercury Marine • 2004: Pre- and post-processing on Windows PCs with 2 GB RAM; computing on HP Unix workstations (single CPU, 4-8 GB RAM, ~$200k for 10 boxes); memory limitations on pre/post and limited model size; minimal parallel processing (CFD only) • 2005: Updated processing capabilities with a Linux compute server (4-CPU Itanium with 32 GB RAM for FEA, 6-CPU Opteron for CFD, $125k server); increased model size with larger memory; parallel processing for FEA & CFD; ~same number of processors as the previous system with large increases in speed and capability • 2007: Updated pre/post (the 2004 PCs) with 2x2-core Linux workstations (3.0 GHz, 4-16 GB RAM); more desktop memory for pre-processing; increased computing by clustering the new pre/post machines; small and mid-sized standard FEA run on pre/post machines using multiple CPUs • 2009: Introduce Windows HPC Server…

  15. 2009 HPC Decision • Influencing factors: emphasis on minimizing analysis time over maximizing computer & software utilization; limited availability of the server room; Linux support; cost consciousness; ease of implementation; machine would run only Abaqus • Goals: reduce large run times by 2.5x or more; ability to handle larger future runs; system supported by in-house IT; re-evaluate the software-versus-hardware balance

  16. Why Windows HPC Server? • Limited access to Unix/Linux support group • Unix/Linux support group has database expertise – little experience in high performance computing • HPC projects lower priority than company database projects • Larger Windows platform support group • Benchmarking showed competitive run times • Intuitive use and easy management • Job Scheduler • Add Node Wizard • Heat Map
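
The job scheduler is scriptable as well as GUI-driven. As a rough illustration of what queueing a solver run on a cluster like this could look like, here is a minimal Python sketch built around the HPC Pack 2008 `job submit` command line; the job name, input file, share path, and flag values are assumptions for illustration, not Mercury's actual setup:

```python
# Hypothetical sketch: dispatching an Abaqus run to a Windows HPC Server 2008
# cluster via the HPC Pack "job submit" command line. All names and paths
# below are illustrative assumptions.
import subprocess

def submit_abaqus_job(jobname: str, inp_file: str, cores: int = 8) -> None:
    """Queue one Abaqus analysis on the cluster scheduler."""
    cmd = [
        "job", "submit",
        f"/numprocessors:{cores}-{cores}",             # request exactly `cores` cores
        f"/jobname:{jobname}",
        f"/stdout:\\\\headnode\\runs\\{jobname}.log",  # assumed UNC share
        # Abaqus's own cpus= option should match the scheduler request
        "abaqus", f"job={jobname}", f"input={inp_file}",
        f"cpus={cores}", "interactive",
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    submit_abaqus_job("head_boltup", "head_boltup.inp", cores=8)
```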

  17. Mercury HPC System Detail, 2009 • Windows Server 2008 HPC Edition • 32-core system plus head node • 4 compute nodes with 8 cores per node • 40 GB/node (160 GB total) • Head node (X3650): 2 x E5440 quad-core Xeon, 2.8 GHz / 12 MB L2 / 1333 MHz bus; 16 GB 667 MHz memory; 6 x 1.0 TB SATA drives in RAID 10 • Compute nodes (4 x X3450): 2 x E5472 quad-core Xeon, 3.0 GHz / 12 MB L2 / 1600 MHz bus; 40 GB 800 MHz memory; 2 x 750 GB SATA drives in RAID 0 • Nodes connected through a GigE switch

  18. Outline • About Mercury Marine • Engineering simulation capabilities • Progression of computing systems • HPC system cost and justification • Summary

  19. Justification • Request from management to reduce run turnaround time – some run times reached 1-2 weeks as runs became more detailed and complex • Quicker feedback to avoid late tooling changes • Need to minimize manpower downtime • Large software costs – need to maximize the software investment

  20. Budget Breakdown (Charts: 2009 vs. 2010) • Computers are a small portion of the budget • Budget is skewed toward software over hardware • Rebalancing hardware/software in 2009 slightly shifted this breakdown

  21. Abaqus Token Balancing • Previous Abaqus token count was high to enable multiple simultaneous jobs on smaller machines • Re-balance tokens from several small jobs to fewer large jobs
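
The slide doesn't show the arithmetic, but the effect of re-balancing can be sketched with the Abaqus token curve commonly published at the time (roughly floor(5 x N^0.422) analysis tokens for an N-core job). Treat the formula and the counts below as an illustration, not Mercury's actual license data:

```python
import math

def abaqus_tokens(cores: int) -> int:
    """Analysis tokens for one job, per the commonly published Abaqus
    licensing curve of the era (5 tokens at 1 core, sub-linear growth)."""
    return math.floor(5 * cores ** 0.422)

# Several small jobs vs. one large job drawn from the same token pool:
four_small = 4 * abaqus_tokens(4)  # four concurrent 4-core jobs -> 4 * 8 = 32 tokens
one_large = abaqus_tokens(32)      # one 32-core job -> 21 tokens
print(four_small, one_large)
```

Under this curve, the tokens that once ran four 4-core jobs comfortably cover a single 32-core job, which is the re-balancing the slide describes.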

  22. HPC System Costs (2009) • System Buy Price with OS: $37,000 • 2 Year Lease Price: $16,000 per year • Software re-scaled to match new system • Incremental cost: $7,300 per year
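
As a back-of-envelope check on that incremental cost (my arithmetic, with an assumed labor rate, not figures from the slides):

```python
# Break-even estimate for the $7,300/year incremental HPC cost.
incremental_cost_per_year = 7_300   # from the slide
analyst_rate_per_hour = 75          # assumption: loaded analyst labor rate
breakeven_hours = incremental_cost_per_year / analyst_rate_per_hour
print(f"Break-even at ~{breakeven_hours:.0f} saved analyst-hours per year")
# -> ~97 hours, small next to run times that previously stretched 1-2 weeks
```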

  23. Historic Productivity Increases • Continual improvement in productivity • Large increases in analysis complexity

  24. Abaqus S4b Implicit Benchmark • Cylinder head bolt-up • 5,000,000 DOF • 32 GB memory (Chart: run time in hours)
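
A useful lens on benchmark scaling is Amdahl's law, speedup = 1 / ((1 - p) + p/n) for a job whose fraction p runs in parallel on n cores. The sketch below is generic scaling arithmetic with an assumed parallel fraction, not Mercury's measured S4b data:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Ideal speedup for parallel fraction p on n cores (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# An implicit solve assumed to be ~92% parallel, on 8 vs. 32 cores:
for cores in (8, 32):
    print(cores, round(amdahl_speedup(0.92, cores), 1))
# -> ~5.1x on 8 cores, ~9.2x on 32: returns diminish as the serial
#    fraction starts to dominate, which is why run-time gains of "8x"
#    are already a strong result on hardware of this class.
```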

  25. Mercury “Real World” Standard FEA • Block + head + bedplate • 8,800,000 DOF • 55 GB memory • Preload + thermal + reciprocating forces (Image shown: Abaqus S4b benchmark model; chart: run time in hours and days)
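
One rule of thumb you can extract from these numbers (my arithmetic, not the slides'): the memory footprint per degree of freedom for an implicit solve of this kind.

```python
# Memory per DOF implied by the slide's figures.
dof = 8_800_000
mem_gb = 55
kb_per_dof = mem_gb * 1024**2 / dof   # GB -> KB, divided by DOF count
print(f"~{kb_per_dof:.1f} KB per DOF")  # -> ~6.6 KB/DOF
# Scaling a future model's DOF count by ~6-7 KB/DOF gives a first
# estimate of the node memory it will need.
```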

  26. Mercury “Real World” Explicit FEA • Outboard impact • 600,000 elements • dt = 3.5e-8 s for 0.03 s (857k increments) (Chart: run time in hours)
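
The increment count follows directly from the stable time increment: total simulated time divided by dt. A quick check of the slide's numbers:

```python
dt = 3.5e-8      # stable time increment from the slide, seconds
t_total = 0.03   # simulated impact duration, seconds
increments = t_total / dt
print(f"{increments:,.0f} increments")  # -> 857,143, i.e. the ~857k stated
```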

  27. Outline • About Mercury Marine • Engineering simulation capabilities • Progression of computing systems • HPC system cost and justification • Summary

  28. Summary • Mercury HPC has evolved over the last 5 years • Each incremental step has led to greater throughput and increased capabilities that allow us to better meet the demands of a fast-paced product development cycle • Our latest HPC server has delivered run-time improvements as high as 8x at a very affordable price • We expect further gains in meshing productivity as we re-size runs to the new computing system

  29. Progress Continues: Mercury HPC System Detail, 2010 Updates • Windows Server 2008 HPC Edition • Added 48 cores to the existing cluster (combined total of 80 cores) • 6 new compute nodes with 8 cores per node • 24 GB/node on the new nodes • Now running FEA and CFD on the HPC system (~70/30 split) • Head node (X3650): 2 x E5440 quad-core Xeon, 2.8 GHz / 12 MB L2 / 1333 MHz bus; 16 GB 667 MHz memory; 6 x 1.0 TB SATA drives in RAID 10 • Compute nodes (4 x X3450): 2 x E5472 quad-core Xeon, 3.0 GHz / 12 MB L2 / 1600 MHz bus; 40 GB 800 MHz memory per node; 2 x 750 GB SATA drives in RAID 0 • Compute nodes (6 x X3550): 2 x E5570 quad-core Xeon, 3.0 GHz; 24 GB RAM per node; 2 x 500 GB SATA drives in RAID 0 • Nodes connected through an InfiniBand switch

  30. Thank You. Questions?

  31. Contact Info and Links • Arden Anderson • arden.anderson@mercmarine.com • Microsoft HPC Server Case Study • http://www.microsoft.com/casestudies/Windows-HPC-Server-2008/Mercury-Marine/Manufacturer-Adopts-Windows-Server-Based-Cluster-for-Cost-Savings-Improved-Designs/4000008161 • Crash Prediction for Marine Engine Systems at 2008 Abaqus Users Conference • Available by searching conference archives for Mercury Marine: http://www.simulia.com/events/search-ucp.html
