
Presentation Transcript


  1. HP Discover 2012 TB#3258: The benefits and best practices of 10GbE networking with VMware vSphere 5 June 6, 2012 2:45PM

  2. Agenda • The Transition… • What’s new with VMware vSphere 5 • The design considerations • Deployment scenarios • Summary

  3. The Transition

  4. A website focused on technical content • A collection of technical whitepapers covering actual work done in Emulex Labs or partner labs • A resource for how-to content, blogs, application notes, technical videos and technical webcast recordings • Recently selected as one of the Best Technical Websites by Network Products Guide – May 2012

  5. Ethernet Physical Media • Timeline: Mid 1980s – 10Mb (UTP Cat 3); Mid 1990s – 100Mb (UTP Cat 5); Early 2000s – 1Gb (UTP Cat 5, SFP Fiber); Late 2000s – 10Gb (SFP+ Cu, Fiber, X2, Cat 6/7) • SFP+ Direct Attach – Twinax cable, < 10m, ~0.1W per side, ~0.1µs link latency • SFP+ SR (short reach) – MM 62.5µm fiber up to 82m, MM 50µm fiber up to 300m, 1W per side

  6. 10GbE PHY Standards • L – 1310nm laser • S – 850nm laser • X – uses 8B/10B encoding • R – uses 64B/66B encoding • T – unshielded twisted pair • C – copper (Twinax), transmits over 4 lanes in each direction • 4 (as in LX4) – information is transmitted over 4 wavelengths using coarse Wavelength Division Multiplexing • *FDDI-grade MMF

  7. Upgrade Current Network or Greenfield Build • If upgrading the current network • Consider the bandwidth requirements of servers • Implement 10GbE uplinks for 1Gb devices • Attach high-priority devices directly to 10GbE • If building a greenfield 10Gb data center • 10GbE to top of rack • Edge/core switching at 10Gb • 10GbE convergence (NIC/iSCSI/FCoE) throughout

  8. 10GbE Enterprise Class iSCSI/LAN – Optimized Server Virtualization • [Diagram comparing two configurations: a 1GbE LOM plus multiple 1Gb adapters carrying separate Administration, vMotion, LAN 1–4 and iSCSI 1–2 connections, versus a 10GbE LOM plus a dual-port 10Gb adapter converging the LAN and iSCSI traffic]

  9. What’s New in vSphere Networking with 10GbE

  10. VMware Networking Improvements • vMotion • Throughput improved significantly for a single vMotion • ESX 3.5 – ~1.0Gbps • ESX 4.0 – ~2.6Gbps • ESX 4.1 – max 8 Gbps • ESX 5.0 – multiple 8+ Gbps connections • Elapsed time reduced by 50%+ on 10GigE tests • Tx Worldlet • VM-to-VM throughput improved by 2X, up to 19 Gbps • VM-to-native throughput improved by 10% • LRO (Large Receive Offload) • Receive tests indicate 5-30% improvement in throughput • 40-60% decrease in CPU cost

  11. Jumbo MTU with 10GbE • Configure jumbo MTU • VM virtual adapter – change the jumbo packet size value to 9k • ESX host – change the vSwitch MTU to 9k • The NICs at both ends and all intermediate hops/routers/switches must also support jumbo frames • Example: Configure a 9k packet size on the W2k8 VM virtual adapter

  12. Jumbo MTU with 10GbE • Configure jumbo MTU • On the ESX host, change the vSwitch1 MTU to support jumbo frames up to 9000 bytes • Example: Configure a 9k MTU on vSwitch1
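  A rough command-line equivalent of the example above (a minimal sketch; the vSwitch name vSwitch1 follows the slide, and the same change can be made from the vSphere Client):
      # esxcfg-vswitch -m 9000 vSwitch1   (set the MTU of vSwitch1 to 9000 bytes for jumbo frames)
      # esxcfg-vswitch -l                 (list the vSwitches and confirm the new MTU)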

  13. Jumbo Frames even more important with 10GbE • Performance improvement in iSCSI and NFS read/write throughput using jumbo frames on 10GbE (64KB block size)

  14. NetQueues • What are NetQueues? • Provide 8 queues on each 10G port for incoming network traffic • Offload incoming packet sorting/routing functions from the hypervisor to the adapter • Free up CPU resources • Improve platform efficiency and overall system performance • Traffic is balanced across multiple CPU cores for processing using MSI-X • Message Signaled Interrupts (MSI-X) support • Doubles the number of device interrupts generated • Balances interrupts across all CPUs in a Symmetric Multiprocessing (SMP) platform • Optimized throughput between multiple virtual machines and the physical network • Best benefit with jumbo MTU on the vSwitch, vNIC and physical switch

  15. NetQueues • Without NetQueue enabled, there is only a single TX and RX queue for processing all network traffic generated by multiple virtual machines • Enable NetQueue on the ESX host or use the vSphere Client • Configuring NetQueue on an ESX host:
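  A hedged command-line sketch of enabling NetQueue on an ESX host (the VMkernel option name follows VMware's NetQueue documentation of that era; the driver module name and its multi-queue options vary by 10GbE adapter and are left here as placeholders):
      # esxcfg-advcfg -k TRUE netNetqueueEnabled                               (turn on NetQueue in the VMkernel)
      # esxcfg-module -s "<driver-specific queue options>" <nic-driver-module>  (pass the adapter's multi-queue options to its driver)
      A host reboot is required before the settings take effect.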

  16. Performance Benefits Enabling NetQueues and Jumbo MTU • Throughput is increased by 2.50 Gbps – approaching NIC line rate • Reduces average CPU utilization • 35-50% reduction in virtual machine CPU utilization • Receive throughput across VMs: 1500 MTU, 8 VMs, no NetQueue = 7.36 Gbps; 9000 MTU, 8 VMs, NetQueue enabled = 9.86 Gbps (7.36 + 2.50)

  17. Network Traffic Management with 10GbE • 1GigE: dedicated NICs for the various traffic types (e.g. vMotion, IP storage, FT, NFS, iSCSI); bandwidth is assured by the dedicated physical NICs • 10GigE: traffic is typically converged over two 10GbE NICs; traffic types compete for each vmnic, and some traffic types and flows could dominate others through oversubscription

  18. NetIOC: Guaranteed Network Resources • NetIOC disabled: NFS, VM and FT traffic take a dip during vMotion • NetIOC enabled (shares and limits as configured): NFS, VM and FT traffic are not affected by a concurrent vMotion

  19. 802.1p Tagging (QoS) • 802.1p is an IEEE standard for enabling Quality of Service at the MAC level • 802.1p tagging helps provide end-to-end Quality of Service when: • All network switches in the environment treat traffic according to the tags • Tags are added based on the priority of the application/workload • You can now tag any traffic flowing out of the vSphere infrastructure

  20. VMware vMotion • Benefits of vMotion • Eliminates planned downtime • Enables dynamic load balancing • Reduces power consumption • Essential to managing the virtualized datacenter • Challenges • Larger VMs require more time to vMotion • Larger environments require more vMotions to remain balanced

  21. vSphere 5 vMotion – Multi-NIC support • The vSphere 5.0 out-of-box configuration provides a 25% to 30% reduction in vMotion time • Using 2 NICs can provide linear scaling and a 50% reduction in vMotion time • Elapsed time reduced by 50%+ on 10GbE tests
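  A minimal sketch of a multi-NIC vMotion setup on a standard vSwitch (the port group names, IP addresses and vSwitch1 are assumptions; pinning each port group to a different active uplink and enabling vMotion on each VMkernel NIC is then done in the vSphere Client):
      # esxcfg-vswitch -A vMotion-1 vSwitch1                           (first vMotion port group)
      # esxcfg-vswitch -A vMotion-2 vSwitch1                           (second vMotion port group)
      # esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 vMotion-1   (VMkernel NIC on the first port group)
      # esxcfg-vmknic -a -i 192.168.10.12 -n 255.255.255.0 vMotion-2   (VMkernel NIC on the second port group)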

  22. VMware Fault Tolerance • Single identical VMs running in lockstep on separate hosts • Zero downtime, zero data loss failover from hardware failures • No complex clustering or specialized hardware • A single common mechanism for all applications and operating systems • 10GbE Benefits • 20% better VM performance • Easier to manage: fewer NICs required • 10GbE is highly recommended for FT

  23. vSphere 5 – Networking Improvements (SplitRxMode) • Greatly reduces packet loss for multicast processing • vSphere 5 introduces a new way of doing network packet receive processing: the cost of receive packet processing is split across multiple contexts • For each VM you can specify whether receive packet processing happens in the network queue context or in a separate context • Results with more than 24 VMs powered on: • Without splitRxMode – up to 20% packet loss • With splitRxMode enabled – less than 0.01% packet loss • Enable splitRxMode on a per-vNIC basis by editing the VM's vmx file and setting ethernetX.emuRxMode = "1" for the Ethernet device • Note: this is only available with vmxnet3
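  For illustration, the vmx entry from this slide applied to a VM's first vmxnet3 adapter (ethernet0 is just the assumed device index; use the index of the vNIC you want to change, and power the VM off and on for the setting to take effect):
      ethernet0.emuRxMode = "1"     (enable splitRxMode for this vmxnet3 vNIC)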

  24. Network Latency • VM round-trip latency overhead is 15-20 microseconds • DirectPath reduces round-trip latency by 10 µs • How to reduce network latency: • Consider disabling power management • Consider disabling physical and virtual NIC interrupt coalescing • Reduce contention on the physical NIC • Utilize 10Gb NICs • Utilize the VMXNET3 paravirtualized device driver • Utilize DirectPath (if really needed) • http://www.vmware.com/files/pdf/techpaper/network-io-latency-perf-vsphere5.pdf
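  A hedged example of the virtual NIC coalescing tweak mentioned above, based on the referenced VMware latency paper (ethernet0 is an assumed device index; it applies to vmxnet3 vNICs and trades extra CPU cost for lower latency):
      ethernet0.coalescingScheme = "disabled"     (disable virtual interrupt coalescing for this vNIC)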

  25. Networking & 10GbE Best Practices • Turn on Hyper-Threading in the BIOS • Confirm that the BIOS is set to enable all populated sockets and all cores • Enable "Turbo Mode" for processors that support it • Confirm that hardware-assisted virtualization features are enabled in the BIOS • Disable any other power-saving modes in the BIOS • Disable any unneeded devices in the BIOS, such as serial and USB ports • For Windows VMs, use VMXNET3 enhanced virtual adapters for best performance • Adjust the network heap size for excessive network traffic • By default the ESX server network stack allocates 64MB of buffers to handle network data • Increase the buffer allocation from 64MB to 128MB of memory to handle more network data • To change the heap size on the ESX host: # esxcfg-advcfg -k 128 netPktHeapMaxSize
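  To read back the heap setting before or after changing it (a small addition assuming the same VMkernel option name used on the slide; a host reboot is required for the new value to apply):
      # esxcfg-advcfg -j netPktHeapMaxSize     (display the current VMkernel value)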

  26. Networking & 10GbE Best Practices Cont'd • Be mindful of converged networks; storage load can affect the network • Use NIOC to balance and control network workloads • Use distributed switches for cross-ESX-host network convenience; there is no significant performance impact for DvSwitch vs. vSwitch • A DvSwitch is needed for NIOC and advanced network shaping • Utilize the latest NIC features • Jumbo frames for iSCSI and NFS workloads • Utilize 10Gb Ethernet hardware • Utilize NIC teams for failover (HA) and NIC load balancing • Use multiple NICs for improved vMotion speeds • Keep an eye out for packet loss and network latency

  27. Design Considerations

  28. Design Considerations • Separate infrastructure traffic from VM traffic • VMs should not see infrastructure traffic (security violation) • Methods of traffic separation: • 1. Separate logical networks (VLANs) • Create one VDS and connect all pNICs • Create port groups with different VLANs; place the vmknics and the VMs' vNICs on different port groups • 2 pNICs are sufficient • 2. Separate physical networks • Create one VDS for each physical network • Create port groups; put the vmknics and the VMs' vNICs on port groups from different switches • Requires at least 4 pNICs (2 per network for redundancy)
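  A minimal sketch of the VLAN-separation approach, shown on a standard vSwitch for brevity (the port group names, VLAN IDs and vSwitch0 are assumptions; the VDS equivalent is built in vCenter with distributed port groups):
      # esxcfg-vswitch -A Mgmt vSwitch0           (port group for vmknic/infrastructure traffic)
      # esxcfg-vswitch -v 10 -p Mgmt vSwitch0     (tag the Mgmt port group with VLAN 10)
      # esxcfg-vswitch -A VM-Net vSwitch0         (port group for virtual machine traffic)
      # esxcfg-vswitch -v 20 -p VM-Net vSwitch0   (tag the VM-Net port group with VLAN 20)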

  29. Design Considerations – Cont'd • Avoid a single point of failure • Connect two or more physical NICs to an uplink port group • Preferably connect those physical NICs to separate physical switches • Understand your virtual infrastructure traffic flows • Make use of the NetFlow feature to monitor the traffic flows over time • Use the data to come up with appropriate traffic shares to guide the NIOC configuration • Prioritize traffic for important workloads and infrastructure traffic • Use QoS tagging

  30. Design Considerations – Virtual Infrastructure Traffic Types

  31. Design Considerations – Traffic Characteristics / NFS

  32. Deployment Scenarios

  33. Deploying 10GbE • Customer infrastructure needs • Blade server deployment – two 10Gb interfaces • Rack server deployment – two 10Gb interfaces • Physical port configurations • Multiple 1Gb NICs • Two 10Gb NICs • Physical switch capabilities • Switch clustering • Link state tracking

  34. Rack Server Deployment – 10GbE Interface

  35. Static- Port Group to NIC mapping

  36. Dynamic – Use NIOC, Shares and Limits • Bandwidth information is needed for the different traffic types • Consider using NetFlow • Bandwidth assumptions: • Management – less than 1Gb • vMotion – 2Gb • iSCSI – 2Gb • FT – 1Gb • VMs – 2Gb • Share calculation (worked through below): • Equal shares for vMotion, iSCSI and virtual machine traffic • Lower shares for Management and FT
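  As a worked example of how such shares translate into bandwidth (the share values are hypothetical, chosen only to match the ratios above, not taken from the deck): with shares of 20 for vMotion, 20 for iSCSI, 20 for VM traffic, 10 for Management and 10 for FT, the total is 80, and under full contention NIOC guarantees each type share/80 of the 10Gb uplink: 20/80 × 10Gb = 2.5Gb for vMotion, iSCSI and VM traffic, and 10/80 × 10Gb = 1.25Gb for Management and FT, which comfortably covers the bandwidth assumptions listed on this slide.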

  37. Dynamic – Use NIOC, Shares and Limits

  38. Blade Server Deployment – 10GbE Interface

  39. Static – Port group to NIC mapping

  40. Dynamic – Use NIOC, Shares and Limits

  41. Pros and Cons Static

  42. Pros and Cons Dynamic

  43. Emulex is HP's Leading I/O Supplier • Partnership across HP's ESSN (Enterprise Server, Storage, and Networking) business units • Broadest offering across operating systems and form factors • [Product matrix by business unit – Industry Standard Systems: ProLiant DL, ML, SL and BladeSystems (LOM-Down on G7, BLOM on Gen8) with 10GbE uCNA, 10GbE NIC and 8Gb FC; Business Critical Systems: Superdome 2, Integrity Blade (Tiger LOM) and rx2800 with 10GbE uCNA, 10GbE NIC and 4/8Gb FC; Storage Division: P4000, P10000, x9000 and D2D with 10GbE uCNA, 10GbE NIC and 16Gb FC; Networking: 5820, 8Gb/10GbE Combo]

  44. Resources • Emulex Implementers Lab: www.implementerslab.com • Website containing technical whitepapers, application notes, videos and technical blogs. • HP VMware Solutions www.hp.com/go/vmware • VMware Technical Resource Center: http://www.vmware.com/resources/techresources/ • www.hp.com/go/ProLiantGen8

  45. Summary • Migrations from 1GbE to 10GbE do require new hardware, but most servers and top-of-rack switches being deployed today are already 10GbE • VMware Distributed Virtual Switch enables datacenter-wide virtual networking • VMware ESXi 5.0, complemented by 10GbE adapters provided by Emulex, delivers the tools and technologies required to successfully deploy 10GbE networks • HP adapters provided by Emulex deliver all the necessary technology to support VMware vSphere 5.0 10GbE network deployments

  46. Emulex Breakout Sessions • Wednesday, June 6th – 2:45pm • The benefits and best practices of using 10Gb Ethernet with VMware vSphere 5.0 (session #TB3258) • Join Emulex and VMware for a technical discussion to help understand the effects of transitioning from 1Gb to 10Gb Ethernet. It is no longer just about speeds and feeds; it requires knowledge of new network architectures and management tools in order to set up and monitor bandwidth allocation and traffic prioritization. Storage and network architects need an understanding of the new storage technologies and networking features VMware brings with ESXi 5.0. • Speakers: Alex Amaya (Emulex) and VMware • Thursday, June 7th – 11:15am • Converged I/O for HP ProLiant Gen8 (session #BB3259) • Join this interactive session with IT Brand Pulse to understand the benefits of HP converged networking by exploring a customer's real-world deployment. HP has just unveiled its next-generation ProLiant Gen8 platform that accelerates server market transformation – in addition to being self-sufficient, this new platform is enabled with industry-leading I/O, making it incredibly fast and flexible. The HP ProLiant Gen8 platform delivers converged I/O that includes FlexFabric, Flex-10, iSCSI and Fibre Channel over Ethernet, at the price of 10GbE. • Speaker: Frank Berry, CEO of IT Brand Pulse • Get a Starbucks card! Enter to win a $100 Visa gift card! Enter to win the 2012 Ducati Monster motorcycle!

  47. Emulex Booth – Demos & Grand Prize • Management • OneCommand® Manager • Emulex OCM vCenter plug-in • 8Gb FC HBAs – stand-up and mezzanine • HP Converged Infrastructure – Path to the Cloud • FCoE • iSCSI • 10GbE Ethernet • GRAND PRIZE • 2012 Ducati Monster • Requirements for bike entry: • Listen to a booth demo • Participate in an Emulex breakout session • Winner announced on 6/7 at 1:30 PM!
