VMUG Storage After-Work Meeting (gå-hjem møde): VMware vSphere 5 Update - Storage Integration

Presentation Transcript


  1. VMUG Storage After-Work Meeting (gå-hjem møde): VMware vSphere 5 Update - Storage Integration. Morten Petersen, Sr. Technology Consultant. Tel: 2920 2328, morten.petersen@emc.com

  2. Core Storage & Infrastructure Related Topics. This section will cover:
• vStorage APIs for Array Integration (VAAI) – expansion
• vStorage APIs for Storage Awareness (VASA)
• Storage vMotion enhancements
• Storage DRS

  3. VAAI = vStorage APIs for Array Integration
• A set of APIs that allow ESX to offload functions to storage arrays.
• In vSphere 4.1, supported on VMware File System (VMFS) and Raw Device Mapping (RDM) volumes; vSphere 5 adds NFS VAAI APIs.
• Supported by EMC VNX, CX/NS, and VMAX arrays (coming soon to Isilon).
• Goals: remove bottlenecks; offload expensive data operations to storage arrays.
• Motivation: efficiency and scaling.
• Understanding VAAI a little "lower": VI3.5 (fsdm), vSphere 4 (fs3dm – software), vSphere 4.1/5 (hardware) = VAAI.
• Diagram from VMworld 2009 TA3220 – Satyam Vaghani.
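
A minimal pyVmomi sketch (an illustration added here, not part of the original deck; hostname and credentials are placeholders) that reports the VAAI hardware-acceleration status of each SCSI disk an ESXi host can see:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab use only: skips certificate validation
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for lun in host.configManager.storageSystem.storageDeviceInfo.scsiLun:
            if isinstance(lun, vim.host.ScsiDisk):
                # vStorageSupport is "vStorageSupported", "vStorageUnsupported" or "vStorageUnknown"
                print(host.name, lun.canonicalName, lun.vStorageSupport)
    Disconnect(si)

This is the same status the vSphere Client surfaces as the "Hardware Acceleration" column for devices and datastores.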

  4. Growing list of VAAI hardware offloads
• vSphere 4.1
  • For block storage: HW Accelerated Locking, HW Accelerated Zero, HW Accelerated Copy.
  • For NAS storage: none.
• vSphere 5
  • For block storage: Thin Provision Stun, Space Reclaim.
  • For NAS storage: Full Clone, Extended Stats, Space Reservation.

  5. VAAI in vSphere 4.1 = Big impact http://www.emc.com/collateral/hardware/white-papers/h8115-vmware-vstorage-vmax-wp.pdf

  6. vSphere 5 – Thin Provision Stun
• Without the API: when a datastore cannot allocate in VMFS because the LUN's pool in the array has exhausted its free blocks, VMs crash, snapshots fail, and other badness follows.
• This is not a problem with "thick" devices, as allocation is fixed; thin LUNs can fail to deliver a write BEFORE the VMFS is full, so careful management at both the VMware and array level is needed.
• With the API: rather than erroring on the write, the array reports a new error message. On receiving it, the affected VMs are "stunned", giving you the opportunity to expand the thin pool at the array level.
(Diagram: thin LUNs on a shared storage pool; SCSI writes to the VMFS-5 extent succeed until the pool's free blocks are exhausted, at which point a write errors.)
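
The "careful management at the VMware level" point can be scripted; a hedged sketch (reusing a ServiceInstance si obtained as in the earlier VAAI example) that flags datastores whose provisioned space exceeds capacity, i.e. the ones most exposed to thin-pool exhaustion (array-side pool utilization still needs the vendor's own tools):

    from pyVmomi import vim

    def report_oversubscribed(si, threshold=1.0):
        # provisioned = used space + thin allocations not yet written (uncommitted)
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
        for ds in view.view:
            s = ds.summary
            provisioned = s.capacity - s.freeSpace + (s.uncommitted or 0)
            ratio = provisioned / float(s.capacity)
            if ratio > threshold:
                print("%s: %.0f%% provisioned (%.1f of %.1f GB)" %
                      (s.name, ratio * 100, provisioned / 2**30, s.capacity / 2**30))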

  7. vSphere 5 – Space Reclamation
• Without the API: when VMFS deletes a file, the file's allocations are returned for reuse, and in some cases SCSI WRITE ZERO would zero out the blocks. If the blocks were zeroed, manual space reclamation at the device layer could help.
• With the API: instead of SCSI WRITE ZERO, SCSI UNMAP is used, and the array releases the blocks back to the free pool.
• Used any time VMFS deletes (Storage vMotion, Delete VM, Delete Snapshot, Delete).
• Note that in vSphere 5, SCSI UNMAP is used in many other places where SCSI WRITE ZERO was previously used, and this depends on VMFS-5.
(Diagram: file creates issue SCSI WRITE - DATA to the VMFS-5 extent; file deletes now issue SCSI UNMAP so the blocks are returned to the storage pool's free blocks.)
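
Because the UNMAP behaviour depends on VMFS-5, a small companion sketch (again an addition, not from the deck) lists each VMFS datastore with its filesystem version so VMFS-3 volumes that still need upgrading are easy to spot:

    from pyVmomi import vim

    def list_vmfs_versions(si):
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
        for ds in view.view:
            info = ds.info
            if isinstance(info, vim.host.VmfsDatastoreInfo):
                # info.vmfs.version is a string such as "3.46" or "5.54"
                print("%-30s VMFS %s (block size %d MB)" %
                      (ds.name, info.vmfs.version, info.vmfs.blockSizeMb))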

  8. vSphere 5 – NFS Full Copy
• Without the API: some NFS servers have the ability to create file-level replicas, but this feature was not used for VMware operations, which remained traditional host-based file copies. Vendors would expose it via vCenter plug-ins instead; for example, EMC exposed this array feature via the Virtual Storage Integrator plug-in's Unified Storage module.
• With the API: implemented via a NAS vendor plug-in and used by vSphere for clone and deploy-from-template operations.
• Uses the EMC VNX OE for File "file version" capability.
• Somewhat analogous to the block XCOPY hardware offload.
• NOTE – not used during Storage vMotion.
(Diagram: instead of the ESX host performing many file reads and writes to copy FOO.VMDK, the NFS server is asked to create a copy (snap, clone, version) of the file, producing FOO-COPY.VMDK.)

  9. vSphere 5 – NFS Extended Stats ("just HOW much space does this file take?")
• Without the API: unlike with VMFS, vSphere does not control the filesystem on an NFS datastore. The vSphere 4.x client saw only basic file and filesystem attributes, which led to challenges in managing space when thin VMDKs were used: administrators had no visibility into thin state or oversubscription of either datastores or VMDKs (with thin LUNs under VMFS, you could at least see details on thin VMDKs).
• With the API: implemented via a NAS vendor plug-in; the NFS client reads extended file and filesystem details.
(Diagram: asking about FOO.VMDK returns "file size = 100 GB" without the API, versus "file size = 100 GB, but it is a sparse file with 24 GB of allocations in the filesystem, and it is deduplicated, so it is only REALLY using 5 GB" with the API.)
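
For the provisioned-versus-used question at the VM level, the standard per-VM storage summary already distinguishes committed (allocated) from uncommitted (provisioned but not yet allocated) space; a hedged sketch using the same si connection as before:

    from pyVmomi import vim

    def report_thin_usage(si):
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            st = vm.summary.storage
            if st is None:
                continue
            provisioned = st.committed + st.uncommitted
            print("%-30s provisioned %6.1f GB, committed %6.1f GB" %
                  (vm.name, provisioned / 2**30, st.committed / 2**30))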

  10. vSphere 5 – NFS Reserve Space
• Without the API: there was no way on NFS datastores to create the equivalent of an "eager zeroed thick" VMDK (needed for WSFC) or a "zeroed thick" VMDK.
• With the API: implemented via a NAS vendor plug-in; reserves the complete space for a VMDK on an NFS datastore.
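
A hedged sketch (datastore and path names are hypothetical) that asks vCenter to create an eager-zeroed-thick VMDK; on an NFS datastore with a VAAI NAS plug-in installed, the space reservation is offloaded to the array:

    from pyVmomi import vim

    def create_ezt_disk(si, datacenter, datastore_name="nfs_datastore_01",
                        path="reserved/disk1.vmdk", size_gb=20):
        spec = vim.VirtualDiskManager.FileBackedVirtualDiskSpec(
            diskType="eagerZeroedThick",
            adapterType="lsiLogic",
            capacityKb=size_gb * 1024 * 1024)
        name = "[%s] %s" % (datastore_name, path)
        return si.content.virtualDiskManager.CreateVirtualDisk_Task(
            name=name, datacenter=datacenter, spec=spec)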

  11. <Video>

  12. What Is VASA?
• VASA is an extension of the vSphere Storage APIs: vCenter-based extensions that allow storage arrays to integrate with vCenter for management functionality via server-side plug-ins, called Vendor Providers.
• It allows a vCenter administrator to be aware of the topology, capabilities, and state of the physical storage devices available to the cluster. Think of it as saying: "this datastore is protected with RAID 5, replicated with a 10 minute RPO, snapshotted every 15 minutes, and is compressed and deduplicated".
• VASA enables several features:
  • It delivers system-defined (array-defined) capabilities, which enable Profile-Driven Storage.
  • It provides array-internal information that helps Storage DRS work optimally with various arrays.

  13. How VASA Works
• VASA allows a storage vendor to develop a software component called a VASA Provider for its storage arrays.
• The VASA Provider gets information from the storage array about available storage topology, capabilities, and state.
• The vCenter Server connects to the VASA Provider, and information from the provider is displayed in the vSphere Client.
(Diagram: EMC storage → VASA Provider → vCenter Server 5.0 → vSphere Client.)

  14. Storage Policy
• Once the VASA Provider has been successfully added to vCenter, the VM Storage Profiles view displays the storage capabilities from the Vendor Provider.
• For EMC in Q3, this was provided for VNX and VMAX via Solutions Enabler for block storage; NFS will require user-defined capabilities.
• In the future, VNX will have a native provider and will gain NFS system-defined profiles.
• Isilon VASA support is targeted for Q4.

  15. Profile-Driven Storage
• Profile-driven storage enables the creation of datastores which provide varying levels of service. It can be used to:
  • Categorize datastores based on system- or user-defined levels of service (for example, user-defined levels might be gold, silver, and bronze).
  • Provision a virtual machine's disks on the "correct" storage.
  • Check that virtual machines comply with user-defined storage requirements.
(Diagram: gold, silver, bronze, and unknown datastore tiers, with virtual machines marked compliant or not compliant.)

  16. Create VM Storage Profile and Capabilities (Home - VM Storage Profile)

  17. Using the Virtual Machine Storage Profile. Use the virtual machine storage profile when you create, clone, or migrate a virtual machine.

  18. Storage Profile During Provisioning
• By selecting a VM Storage Profile, datastores are now split into Compatible & Incompatible.
• The Celerra_NFS datastore is the only datastore which meets the GOLD profile (user-defined) requirements.

  19. VM Storage Profile Compliance

  20. Storage vMotion – Enhancements
• New functionality: in vSphere 5.0, Storage vMotion uses a new mirroring architecture (versus the old snapshot method) to provide the following advantages over previous versions:
  • Guarantees migration success even when facing a slower destination.
  • More predictable (and shorter) migration time.
• New features:
  • Storage vMotion in vSphere 5 works with virtual machines that have snapshots, which means coexistence with other VMware products & features such as VDR & vSphere Replication.
  • Storage vMotion supports the relocation of linked clones.
• New use case: Storage DRS.
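
A hedged sketch (VM and datastore names are placeholders, not from the deck) of a scripted Storage vMotion via the standard RelocateVM_Task API, moving all of a VM's disks to a target datastore while leaving host and resource pool unchanged:

    from pyVmomi import vim

    def storage_vmotion(si, vm_name="app01", target_ds_name="VNX_Pool_Gold"):
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine, vim.Datastore], True)
        vm = next(o for o in view.view
                  if isinstance(o, vim.VirtualMachine) and o.name == vm_name)
        ds = next(o for o in view.view
                  if isinstance(o, vim.Datastore) and o.name == target_ds_name)
        spec = vim.vm.RelocateSpec(datastore=ds)  # storage-only move
        return vm.RelocateVM_Task(spec=spec)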

  21. What Does Storage DRS Solve?
• Without Storage DRS:
  • Identify the datastore with the most disk space and lowest latency.
  • Validate which virtual machines are placed on the datastore and ensure there are no conflicts.
  • Create the virtual machine and hope for the best.
• With Storage DRS:
  • Automatic selection of the best placement for your VM.
  • Advanced balancing mechanism to avoid storage performance bottlenecks or "out of space" problems.
  • VM or VMDK affinity rules.

  22. Datastore Cluster
• A group of datastores is called a "datastore cluster".
• Think:
  • Datastore cluster - Storage DRS = simply a group of datastores (like a datastore folder).
  • Datastore cluster + Storage DRS = a resource pool analogous to a DRS cluster.
  • Datastore cluster + Storage DRS + Profile-Driven Storage = nirvana 
(Diagram: four 500 GB datastores forming a 2 TB datastore cluster.)

  23. Storage DRS – Initial Placement
• Initial placement applies to VM/VMDK create, clone, and relocate operations.
• When creating a VM you select a datastore cluster rather than an individual datastore, and SDRS recommends a datastore based on space utilization and I/O load.
• By default, all the VMDKs of a VM are placed on the same datastore within the datastore cluster (VMDK affinity rule), but you can choose to have VMDKs assigned to different datastores. The same placement request can also be made programmatically, as sketched below.
(Diagram: a 2 TB datastore cluster of four 500 GB datastores with 300, 260, 265, and 275 GB available.)
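
A hedged sketch (object names are placeholders) of that initial-placement flow: ask Storage DRS for a recommendation when cloning a VM into a datastore cluster (a StoragePod), then apply the top recommendation:

    from pyVmomi import vim

    def sdrs_clone(si, vm, folder, pod, clone_name="app01-clone"):
        podsel = vim.storageDrs.PodSelectionSpec(storagePod=pod)
        clone_spec = vim.vm.CloneSpec(powerOn=False, template=False,
                                      location=vim.vm.RelocateSpec())
        placement = vim.storageDrs.StoragePlacementSpec(
            type="clone", podSelectionSpec=podsel, vm=vm, folder=folder,
            cloneName=clone_name, cloneSpec=clone_spec)
        srm = si.content.storageResourceManager
        result = srm.RecommendDatastores(storageSpec=placement)
        key = result.recommendations[0].key  # take the first (best) recommendation
        return srm.ApplyStorageDrsRecommendation_Task(key=[key])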

  24. Storage DRS QoS Operations
• When using EMC FAST VP, use SDRS but disable the I/O metric (see the sketch below). This combination gives you the simplicity benefits of SDRS for automated placement and capacity balancing and adds:
  • Economic and performance benefits of automated tiering across SSD, FC, SAS, and SATA.
  • 10x (VNX) and 100x (VMAX) higher granularity (sub-VMDK).
• SDRS triggers action on capacity and/or latency:
  • Capacity stats are constantly gathered by vCenter; default threshold 80%.
  • I/O load trend is evaluated (by default) every 8 hours based on the past day's history; default threshold 15 ms.
  • Storage DRS will do a cost/benefit analysis!
  • For latency, Storage DRS leverages Storage I/O Control functionality.
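
A hedged sketch of the FAST VP recommendation above: keep Storage DRS enabled on a datastore cluster (pod) but turn off the I/O metric, leaving capacity balancing in place:

    from pyVmomi import vim

    def disable_sdrs_io_metric(si, pod):
        pod_cfg = vim.storageDrs.PodConfigSpec(
            enabled=True,                  # keep SDRS itself on
            ioLoadBalanceEnabled=False)    # do not balance on latency / I/O load
        spec = vim.storageDrs.ConfigSpec(podConfigSpec=pod_cfg)
        srm = si.content.storageResourceManager
        return srm.ConfigureStorageDrsForPod_Task(pod=pod, spec=spec, modify=True)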

  25. EMC VFCache Performance. Intelligence. Protection.

  26. What If You Could Achieve an Order of Magnitude Better Performance?
(Chart: IOPS/GB of PCIe Flash technology compared with traditional storage media.)

  27. Performance Results: Traditional Architecture
• Reads and writes are serviced by the storage array.
• Performance varies depending on the back-end array's media, workload, and network.
• Read latency: ~640 μs – 7.5 ms; write latency: ~550 μs – 11 ms.*
(Diagram: I/O path through a FAST VP policy spanning EFD, FC HDD, and SATA HDD tiers.)
* VNX7500 with 20 SSDs and 20 HDDs; typical loads with 32 outstanding I/Os.

  28. Performance Results: VFCache Advanced Architecture
• Reads are serviced by VFCache for performance.
• Writes are passed through to the storage array for protection.
• Read latency: ~<100 μs; write latency: ~550 μs – 11 ms.*
(Diagram: I/O path with VFCache servicing reads locally while writes pass through to the FAST VP tiers on the array.)
* VNX7500 with 20 SSDs and 20 HDDs; typical loads with 32 outstanding I/Os.

  29. 100 Percent Transparent Caching: the VFCache driver extends your SAN.
(Diagram: application → VFCache driver → SAN HBA / PCIe Flash → SAN storage.)

  30. vStorage APIs
• The vStorage APIs are a "family":
  • vStorage API for Array Integration (VAAI)
  • vStorage API for Site Recovery Manager
  • vStorage API for Data Protection
  • vStorage API for Multipathing
  • vStorage API for Storage Awareness (VASA)

  31. vCenter Plug-ins
• VMware administrators already use vCenter to manage their organization's ESX/ESXi clusters and virtual machines.
• Purpose-built plug-ins to the vCenter management interface allow VMware administrators to provision, manage, and monitor their storage from vCenter as well.

  32. EMC now has ONE vCenter plug-in to do it all: Virtual Storage Integrator (VSI).
(Screenshot: VSI feature menu structure.)
