
vSphere 4.0 Storage: Features and Enhancements


Presentation Transcript


  1. vSphere 4.0 Storage: Features and Enhancements Nathan Small Staff Engineer Rev E Last updated 3rd August 2009 VMware Inc

  2. Introduction • This presentation is a technical overview and deep dive of some of the features and enhancements to the storage stack and related storage components of vSphere 4.0

  3. New Acronyms in vSphere 4 • MPX = VMware Generic Multipath Device (No Unique Identifier) • NAA = Network Addressing Authority • PSA = Pluggable Storage Architecture • MPP = Multipathing Plugin • NMP = Native Multipathing • SATP = Storage Array Type Plugin • PSP = Path Selection Plugin

  4. vSphere Storage • Section 1 - Naming Convention Change • Section 2 - Pluggable Storage Architecture • Section 3 - iSCSI Enhancements • Section 4 - Storage Administration (VC) • Section 5 - Snapshot Volumes & Resignaturing • Section 6 - Storage VMotion • Section 7 - Volume Grow / Hot VMDK Extend • Section 8 - Storage CLI Enhancements • Section 9 - Other Storage Features/Enhancements

  5. Naming Convention Change in vSphere 4 • Although the vmhbaN:C:T:L:P naming convention is visible, it is now known as the run-time name and is no longer guaranteed to be persistent through reboots. • ESX 4 now uses the unique LUN identifiers, typically the NAA (Network Addressing Authority) ID. This is true for the CLI as well as the GUI and is also the naming convention used during the install. • The IQN (iSCSI Qualified Name) is still used for iSCSI targets. • The WWN (World Wide Name) is still used for Fibre Channel targets. • For those devices which do not have a unique ID, you will observe an MPX reference (which basically stands for VMware Multipath X device).
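A quick way to see these identifiers from the service console is the esxcfg-scsidevs command (the ESX 4 replacement for esxcfg-vmhbadevs). A minimal sketch only; the exact output columns will vary by host:
# esxcfg-scsidevs -l (lists each device with its naa./mpx. identifier and display name)
# esxcfg-scsidevs -m (shows which devices and partitions back each VMFS volume)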

  6. vSphere Storage • Section 1 - Naming Convention Change • Section 2 - Pluggable Storage Architecture • Section 3 - iSCSI Enhancements • Section 4 - Storage Administration (VC) • Section 5 - Snapshot Volumes & Resignaturing • Section 6 - Storage VMotion • Section 7 - Volume Grow / Hot VMDK Extend • Section 8 - Storage CLI Enhancements • Section 9 - Other Storage Features/Enhancements

  7. Pluggable Storage Architecture • PSA, the Pluggable Storage Architecture, is a collection of VMkernel APIs that allow third party hardware vendors to insert code directly into the ESX storage I/O path. • This allows 3rd party software developers to design their own load balancing techniques and failover mechanisms for particular storage array types. • This also means that 3rd party vendors can now add support for new arrays into ESX without having to provide internal information or intellectual property about the array to VMware. • VMware, by default, provides a generic Multipathing Plugin (MPP) called NMP (Native Multipathing Plugin). • PSA coordinates the operation of the NMP and any additional 3rd party MPPs.

  8. PSA Tasks • Loads and unloads multipathing plugins (MPPs). • Handles physical path discovery and removal (via scanning). • Routes I/O requests for a specific logical device to an appropriate MPP. • Handles I/O queuing to the physical storage HBAs & to the logical devices. • Implements logical device bandwidth sharing between Virtual Machines. • Provides logical device and physical path I/O statistics.

  9. MPP Tasks • The PSA discovers available storage paths and, based on a set of predefined rules, determines which MPP should be given ownership of each path. • The MPP then associates a set of physical paths with a specific logical device. • The specific details of handling path failover for a given storage array are delegated to a sub-plugin called a Storage Array Type Plugin (SATP). • SATP is associated with paths. • The specific details for determining which physical path is used to issue an I/O request (load balancing) to a storage device are handled by a sub-plugin called Path Selection Plugin (PSP). • PSP is associated with logical devices.
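The predefined rules referred to above are the PSA claim rules, and they can be inspected from the service console. A minimal sketch; the rule numbers and plugins listed will differ from host to host:
# esxcli corestorage claimrule list
The output shows which plugin (MASK_PATH, NMP or a third-party MPP) claims each range of paths.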

  10. NMP Specific Tasks • Manage physical path claiming and unclaiming. • Register and de-register logical devices. • Associate physical paths with logical devices. • Process I/O requests to logical devices: • Select an optimal physical path for the request (load balance) • Perform actions necessary to handle failures and request retries. • Support management tasks such as abort or reset of logical devices.

  11. Storage Array Type Plugin - SATP • A Storage Array Type Plugin (SATP) handles path failover operations. • VMware provides a default SATP for each supported array as well as a generic SATP (an active/active version and an active/passive version) for non-specified storage arrays. • If you want to take advantage of certain storage specific characteristics of your array, you can install a 3rd party SATP provided by the vendor of the storage array, or by a software company specializing in optimizing the use of your storage array. • Each SATP implements the support for a specific type of storage array, e.g. VMW_SATP_SVC for IBM SVC.

  12. SATP (ctd) • The primary functions of an SATP are: • Implements the switching of physical paths to the array when a path has failed. • Determines when a hardware component of a physical path has failed. • Monitors the hardware state of the physical paths to the storage array. • There are many storage array type plug-ins. To see the complete list, you can use the following commands: • # esxcli nmp satp list • # esxcli nmp satp listrules • # esxcli nmp satp listrules -s <specific SATP>
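Should you need to change the default PSP that an SATP hands out (for example, to make every device claimed by a given SATP use Round Robin), there is a corresponding esxcli verb. A sketch only; the SATP name here is just an example taken from the list above:
# esxcli nmp satp setdefaultpsp --satp VMW_SATP_CX --psp VMW_PSP_RR
Devices that are already claimed typically keep their current policy until they are reclaimed or the host is rebooted, so verify the result with esxcli nmp device list.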

  13. Path Selection Plugin (PSP) • If you want to take advantage of more complex I/O load balancing algorithms, you could install a 3rd party Path Selection Plugin (PSP). • A PSP handles load balancing operations and is responsible for choosing a physical path to issue an I/O request to a logical device. • VMware provides three PSPs: Fixed, MRU and Round Robin.
# esxcli nmp psp list
Name           Description
VMW_PSP_MRU    Most Recently Used Path Selection
VMW_PSP_RR     Round Robin Path Selection
VMW_PSP_FIXED  Fixed Path Selection

  14. NMP Supported PSPs • Most Recently Used (MRU) — Selects the first working path discovered at system boot time. If this path becomes unavailable, the ESX host switches to an alternative path and continues to use the new path while it is available. • Fixed — Uses the designated preferred path, if it has been configured. Otherwise, it uses the first working path discovered at system boot time. If the ESX host cannot use the preferred path, it selects a random alternative available path. The ESX host automatically reverts back to the preferred path as soon as the path becomes available. • Round Robin (RR) — Rotates automatically through all available paths, enabling load balancing across the paths.
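The path selection policy can also be changed per device from the CLI. A minimal sketch, using an NAA id from the earlier slides purely as a placeholder:
# esxcli nmp device setpolicy --device naa.600601601d311f001ee294d9e7e2dd11 --psp VMW_PSP_RR
# esxcli nmp device list -d naa.600601601d311f001ee294d9e7e2dd11
The second command confirms that the Path Selection Policy field now reports VMW_PSP_RR.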

  15. Enabling Additional Logging on vSphere 4.0 • For additional SCSI Log Messages, set: • Scsi.LogCmdErrors = "1" • Scsi.LogMPCmdErrors = "1" • At GA, the default setting for Scsi.LogMPCmdErrors is "1" • These can be found in the Advanced Settings.
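The same options can be set from the service console with esxcfg-advcfg. This is a sketch that assumes the settings live under the /Scsi node, which is how the Advanced Settings names normally map:
# esxcfg-advcfg -s 1 /Scsi/LogCmdErrors
# esxcfg-advcfg -g /Scsi/LogMPCmdErrors
Here -s sets a value and -g reads the current value back.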

  16. Viewing Plugin Information • The following command lists all multipathing modules loaded on the system. At a minimum, this command returns the default VMware Native Multipath (NMP) plugin & the MASK_PATH plugin. Third-party MPPs will also be listed if installed:
# esxcfg-mpath -G
MASK_PATH
NMP
• For ESXi, the following VI CLI 4.0 command can be used:
# vicfg-mpath -G --server <IP> --username <X> --password <Y>
MASK_PATH
NMP
• LUN path masking is done via the MASK_PATH Plug-in.

  17. Viewing Device Information • The command esxcli nmp device list lists all devices managed by the NMP plug-in and the configuration of each device, e.g.:
# esxcli nmp device list
naa.600601601d311f001ee294d9e7e2dd11
    Device Display Name: DGC iSCSI Disk (naa.600601601d311f001ee294d9e7e2dd11)
    Storage Array Type: VMW_SATP_CX
    Storage Array Type Device Config: {navireg ipfilter}
    Path Selection Policy: VMW_PSP_MRU
    Path Selection Policy Device Config: Current Path=vmhba33:C0:T0:L1
    Working Paths: vmhba33:C0:T0:L1
mpx.vmhba1:C0:T0:L0
    Device Display Name: Local VMware Disk (mpx.vmhba1:C0:T0:L0)
    Storage Array Type: VMW_SATP_LOCAL
    Storage Array Type Device Config:
    Path Selection Policy: VMW_PSP_FIXED
    Path Selection Policy Device Config: {preferred=vmhba1:C0:T0:L0;current=vmhba1:C0:T0:L0}
    Working Paths: vmhba1:C0:T0:L0
• naa is the Network Addressing Authority (NAA) identifier, guaranteed to be unique. • {navireg ipfilter} is configuration specific to EMC Clariion & Invista products. • mpx is used as an identifier for devices that do not have their own unique ids.

  18. Viewing Device Information (ctd) • Get current path information for a specified storage device managed by the NMP.
# esxcli nmp device list -d naa.600601604320170080d407794f10dd11
naa.600601604320170080d407794f10dd11
    Device Display Name: DGC Fibre Channel Disk (naa.600601604320170080d407794f10dd11)
    Storage Array Type: VMW_SATP_CX
    Storage Array Type Device Config: {navireg ipfilter}
    Path Selection Policy: VMW_PSP_MRU
    Path Selection Policy Device Config: Current Path=vmhba2:C0:T0:L0
    Working Paths: vmhba2:C0:T0:L0

  19. Viewing Device Information (ctd) • Lists all paths available for a specified storage device on ESX:
# esxcfg-mpath -b -d naa.600601601d311f001ee294d9e7e2dd11
naa.600601601d311f001ee294d9e7e2dd11 : DGC iSCSI Disk (naa.600601601d311f001ee294d9e7e2dd11)
    vmhba33:C0:T0:L1 LUN:1 state:active iscsi Adapter: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b Target: IQN=iqn.1992-04.com.emc:cx.ck200083700716.b0 Alias= Session=00023d000001 PortalTag=1
    vmhba33:C0:T1:L1 LUN:1 state:standby iscsi Adapter: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b Target: IQN=iqn.1992-04.com.emc:cx.ck200083700716.a0 Alias= Session=00023d000001 PortalTag=2
• ESXi has an equivalent vicfg-mpath command.
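For ESXi, the equivalent query can be run remotely with the vSphere CLI. A sketch that assumes vicfg-mpath accepts the same -l and -d switches as its esxcfg-mpath counterpart:
# vicfg-mpath -l --server <IP> --username <X> --password <Y> -d naa.600601601d311f001ee294d9e7e2dd11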

  20. Viewing Device Information (ctd) • # esxcfg-mpath -l -d naa.600601601d311f001ee294d9e7e2dd11 • iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b-00023d000001,iqn.1992-04.com.emc:cx.ck200083700716.a0,t,2-naa.600601601d311f001ee294d9e7e2dd11 • Runtime Name: vmhba33:C0:T1:L1 • Device: naa.600601601d311f001ee294d9e7e2dd11 • Device Display Name: DGC iSCSI Disk (naa.600601601d311f001ee294d9e7e2dd11) • Adapter: vmhba33 Channel: 0 Target: 1 LUN: 1 • Adapter Identifier: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b • Target Identifier: 00023d000001,iqn.1992-04.com.emc:cx.ck200083700716.a0,t,2 • Plugin: NMP • State: standby • Transport: iscsi • Adapter Transport Details: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b • Target Transport Details: IQN=iqn.1992-04.com.emc:cx.ck200083700716.a0 Alias= Session=00023d000001 PortalTag=2 • iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b-00023d000001,iqn.1992-04.com.emc:cx.ck200083700716.b0,t,1-naa.600601601d311f001ee294d9e7e2dd11 • Runtime Name: vmhba33:C0:T0:L1 • Device: naa.600601601d311f001ee294d9e7e2dd11 • Device Display Name: DGC iSCSI Disk (naa.600601601d311f001ee294d9e7e2dd11) • Adapter: vmhba33 Channel: 0 Target: 0 LUN: 1 • Adapter Identifier: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b • Target Identifier: 00023d000001,iqn.1992-04.com.emc:cx.ck200083700716.b0,t,1 • Plugin: NMP • State: active • Transport: iscsi • Adapter Transport Details: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b • Target Transport Details: IQN=iqn.1992-04.com.emc:cx.ck200083700716.b0 Alias= Session=00023d000001 PortalTag=1 • Note the storage array (target) iSCSI Qualified Names (IQNs) in the Target Identifier and Target Transport Details fields.

  21. Third-Party Multipathing Plug-ins (MPPs) • You can install the third-party multipathing plug-ins (MPPs) when you need to change specific load balancing and failover characteristics of ESX/ESXi. • The third-party MPPs replace the behaviour of the NMP and entirely take control over the path failover and the load balancing operations for certain specified storage devices.

  22. Third-Party SATP & PSP • Third-party SATP • Generally developed by third-party hardware manufacturers who have ‘expert’ knowledge of the behaviour of their storage devices. • Accommodates specific characteristics of storage arrays and facilitates support for new arrays. • Third-party PSP • Generally developed by third-party software companies. • More complex I/O load balancing algorithms. • NMP coordination • Third-party SATPs and PSPs are coordinated by the NMP, and can be simultaneously used with the VMware SATPs and PSPs.

  23. vSphere Storage • Section 1 - Naming Convention Change • Section 2 - Pluggable Storage Architecture • Section 3 - iSCSI Enhancements • Section 4 - Storage Administration (VC) • Section 5 - Snapshot Volumes & Resignaturing • Section 6 - Storage VMotion • Section 7 - Volume Grow / Hot VMDK Extend • Section 8 - Storage CLI Enhancements • Section 9 - Other Storage Features/Enhancements

  24. iSCSI Enhancements • ESX 4 includes an updated iSCSI stack which offers improvements to both software iSCSI (initiator that runs at the ESX layer) and hardware iSCSI (a hardware-optimized iSCSI HBA). • For both software and hardware iSCSI, functionality (e.g. CHAP support, digest acceleration, etc.) and performance are improved. • Software iSCSI can now be configured to use host based multipathing if you have more than one physical network adapter. • In the new ESX 4.0 Software iSCSI stack, there is no longer any requirement to have a Service Console connection to communicate to an iSCSI target.

  25. Software iSCSI Enhancements • iSCSI Advanced Settings • In particular, data integrity checks in the form of digests. • CHAP Parameters Settings • A user will be able to specify CHAP parameters as per-target CHAP and mutual per-target CHAP. • Inheritance model of parameters. • A global set of configuration parameters can be set on the initiator and propagated down to all targets. • Per target/discovery level configuration. • Configuration settings can now be set on a per target basis which means that a customer can uniquely configure parameters for each array discovered by the initiator.

  26. Software iSCSI Multipathing – Port Binding • You can now create a port binding between a physical NIC and an iSCSI VMkernel port in ESX 4.0. • Using the "port binding" feature, users can map multiple iSCSI VMkernel ports to different physical NICs. This enables the software iSCSI initiator to use multiple physical NICs for I/O transfer. • Connecting the software iSCSI initiator to the VMkernel ports can only be done from the CLI, using the esxcli swiscsi commands (see the sketch below). • Host based multipathing can then manage the paths to the LUN. • In addition, the Round Robin path policy can be configured to simultaneously use more than one physical NIC for the iSCSI traffic to the iSCSI target.
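A minimal sketch of the CLI side of port binding, assuming two VMkernel ports (vmk1 and vmk2) have already been created on separate physical NICs and that the software iSCSI adapter is vmhba33:
# esxcli swiscsi nic add -n vmk1 -d vmhba33
# esxcli swiscsi nic add -n vmk2 -d vmhba33
# esxcli swiscsi nic list -d vmhba33
After a rescan, each bound VMkernel port appears as a separate path to the iSCSI LUNs, which host based multipathing (and optionally the Round Robin PSP) can then use.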

  27. Hardware iSCSI Limitations • Mutual CHAP is disabled. • Discovery is supported by IP address only (storage array name discovery not supported). • Running with the Hardware and Software iSCSI initiators enabled on the same host at the same time is not supported.

  28. vSphere Storage • Section 1 - Naming Convention Change • Section 2 - Pluggable Storage Architecture • Section 3 - iSCSI Enhancements • Section 4 - Storage Administration (VC) • Section 5 - Snapshot Volumes & Resignaturing • Section 6 - Storage VMotion • Section 7 - Volume Grow / Hot VMDK Extend • Section 8 - Storage CLI Enhancements • Section 9 - Other Storage Features/Enhancements

  29. GUI Changes - Display Device Info Note that there are no further references to vmhbaN:C:T:L. Unique device identifiers such as the NAA id are now used.

  30. GUI Changes - Display HBA Configuration Info Again, notice the use of NAA ids rather than vmhbaN:C:T:L.

  31. GUI Changes - Display Path Info Note the reference to the PSP & SATP. Note the (I/O) status designating the active path.

  32. GUI Changes - Data Center Rescan

  33. Degraded Status • If we detect fewer than 2 HBAs or 2 Targets in the paths of the datastore, we mark the datastore multipathing status as "Partial/No Redundancy" in the Storage Views.

  34. Storage Administration • VI4 also provides new monitoring, reporting and alarm features for storage management. • This now gives a vSphere administrator the ability to: • Manage access/permissions of datastores/folders • Have visibility of a Virtual Machine’s connectivity to the storage infrastructure • Account for disk space utilization • Provide notification in case of specific usage conditions

  35. Datastore Monitoring & Alarms • vSphere introduces new datastore and VM-specific alarms/alerts on storage events: • New datastore alarms: • Datastore disk usage % • Datastore disk over-allocation % • Datastore connection state to all hosts • New VM alarms: • VM total size on disk (GB) • VM snapshot size (GB) • Customers can now track snapshot usage.

  36. New Storage Alarms • New datastore-specific alarms: one allows the tracking of thin-provisioned disks; another triggers if a datastore becomes unavailable to the host. • New VM-specific alarms: one triggers if a snapshot delta file becomes too large.

  37. vSphere 4 Storage • Section 1 - Naming Convention Change • Section 2 - Pluggable Storage Architecture • Section 3 - iSCSI Enhancements • Section 4 - Storage Administration (VC) • Section 5 - Snapshot Volumes & Resignaturing • Section 6 - Storage VMotion • Section 7 - Volume Grow / Hot VMDK Extend • Section 8 - Storage CLI Enhancements • Section 9 - Other Storage Features/Enhancements

  38. Traditional Snapshot Detection • When an ESX 3.x server finds a VMFS-3 LUN, it compares the SCSI_DiskID information returned from the storage array with the SCSI_DiskID information stored in the LVM header. • If the two IDs don't match, then by default the VMFS-3 volume will not be mounted and will thus be inaccessible. • A VMFS volume on ESX 3.x could be detected as a snapshot for a number of reasons: • LUN ID changed • SCSI version supported by array changed (firmware upgrade) • Identifier type changed – Unit Serial Number vs NAA ID

  39. New Snapshot Detection Mechanism • When trying to determine if a device is a snapshot, ESX 4.0 uses a globally unique identifier to identify each LUN, typically the NAA (Network Addressing Authority) ID. • NAA IDs are unique and are persistent across reboots. • There are many different globally unique identifiers (EUI, SNS, T10, etc.). If the LUN does not support any of these globally unique identifiers, ESX will fall back to the serial number + LUN ID mechanism used in ESX 3.0.

  40. SCSI_DiskId Structure • The internal VMkernel structure SCSI_DiskId is populated with information about a LUN. • This is stored in the metadata header of a VMFS volume. • If the LUN does have a globally unique (NAA) ID, the field SCSI_DiskId.data.uid in the SCSI_DiskId structure will hold it. • If the NAA ID in the SCSI_DiskId.data.uid stored in the metadata does not match the NAA ID returned by the LUN, the ESX knows the LUN is a snapshot. • For older arrays that do not support NAA IDs, the earlier algorithm is used where we compare other fields in the SCSI_DiskId structure to detect whether a LUN is a snapshot or not.

  41. Snapshot Log Messages • 8:00:45:51.975 cpu4:81258)ScsiPath: 3685: Plugin 'NMP' claimed path 'vmhba33:C0:T1:L2' • 8:00:45:51.975 cpu4:81258)ScsiPath: 3685: Plugin 'NMP' claimed path 'vmhba33:C0:T0:L2' • 8:00:45:51.977 cpu2:81258)VMWARE SCSI Id: Id for vmhba33:C0:T0:L2 • 0x60 0x06 0x01 0x60 0x1d 0x31 0x1f 0x00 0xfc 0xa3 0xea 0x50 0x1b 0xed 0xdd 0x11 0x52 0x41 0x49 0x44 0x20 0x35 • 8:00:45:51.978 cpu2:81258)VMWARE SCSI Id: Id for vmhba33:C0:T1:L2 • 0x60 0x06 0x01 0x60 0x1d 0x31 0x1f 0x00 0xfc 0xa3 0xea 0x50 0x1b 0xed 0xdd 0x11 0x52 0x41 0x49 0x44 0x20 0x35 • 8:00:45:52.002 cpu2:81258)LVM: 7125: Device naa.600601601d311f00fca3ea501beddd11:1 detected to be a snapshot: • 8:00:45:52.002 cpu2:81258)LVM: 7132: queried disk ID: <type 2, len 22, lun 2, devType 0, scsi 0, h(id) 3817547080305476947> • 8:00:45:52.002 cpu2:81258)LVM: 7139: on-disk disk ID: <type 2, len 22, lun 1, devType 0, scsi 0, h(id) 6335084141271340065> • 8:00:45:52.006 cpu2:81258)ScsiDevice: 1756: Successfully registered device "naa.600601601d311f00fca3ea501beddd11" from plugin "

  42. Resignature & Force-Mount • We have a new naming convention in ESX 4. • “Resignature” is equivalent to EnableResignature = 1 in ESX 3.x. • “Force-Mount” is equivalent to DisallowSnapshotLUN = 0 in ESX 3.x. • The advanced configuration options EnableResignature and DisallowSnapshotLUN have been replaced in ESX 4 with a new CLI utility called esxcfg-volume (vicfg-volume for ESXi). • Historically, EnableResignature and DisallowSnapshotLUN were applied server-wide to all volumes on an ESX host. The new Resignature and Force-Mount operations are volume specific, so they offer much greater granularity in the handling of snapshots.

  43. Persistent Or Non-Persistent Mounts • If you use the GUI to force-mount a VMFS volume, it will make it a persistent mount which will remain in place through reboots of the ESX host. VC will not allow this volume to be resignatured. • If you use the CLI to force-mount a VMFS volume, you can choose whether it persists or not through reboots. • Through the GUI, the Add Storage Wizard now displays the VMFS label. Therefore if a device is not mounted, but it has a label associated with it, you can make the assumption that it is a snapshot, or to use ESX 4 terminology, a Volume Copy.

  44. Mounting A Snapshot • Snapshot – notice that the volume label is the same as the original volume. • The original volume is still presented to the ESX host.

  45. Snapshot Mount Options • Keep Existing Signature – this is a force-mount operation: similar to setting DisallowSnapshotLUN = 0 in ESX 3.x. The new datastore keeps the original UUID saved in the file system header. • If the original volume is already online, this option will not succeed and will print a ‘Cannot change the host configuration’ message when resolving the VMFS volumes. • Assign a New Signature – this is a resignature operation: similar to setting EnableResignature = 1 in ESX 3.x. The new datastore has a new UUID saved in the file system header. • Format the Disk – destroys the data on the disk and creates a new VMFS volume on it.

  46. New CLI Command: esxcfg-volume • There is a new CLI command in ESX 4 for resignaturing VMFS snapshots. Note the difference between ‘-m’ and ‘-M’:
# esxcfg-volume
esxcfg-volume <options>
   -l|--list                                 List all volumes which have been detected as snapshots/replicas.
   -m|--mount <VMFS UUID|label>              Mount a snapshot/replica volume, if its original copy is not online.
   -u|--umount <VMFS UUID|label>             Umount a snapshot/replica volume.
   -r|--resignature <VMFS UUID|label>        Resignature a snapshot/replica volume.
   -M|--persistent-mount <VMFS UUID|label>   Mount a snapshot/replica volume persistently, if its original copy is not online.
   -h|--help                                 Show this message.

  47. esxcfg-volume (ctd) • The difference between a mount and a persistent mount is that persistent mounts will be maintained through reboots. • ESX manages this by adding entries for force mounts into /etc/vmware/esx.conf. • A typical set of entries for a force mount looks like: • /fs/vmfs[48d247dd-7971f45b-5ee4-0019993032e1]\ /forceMountedLvm/forceMount = "true" • /fs/vmfs[48d247dd-7971f45b-5ee4-0019993032e1]\ /forceMountedLvm/lvmName = "48d247da-b18fd17c-1da1-0019993032e1" • /fs/vmfs[48d247dd-7971f45b-5ee4-0019993032e1]\ /forceMountedLvm/readOnly = "false"

  48. Mount With the Original Volume Still Online • /var/log # esxcfg-volume -l • VMFS3 UUID/label: 496f202f-3ff43d2e-7efe-001f29595f9d/Shared_VMFS_For_FT_VMs • Can mount: No (the original volume is still online) • Can resignature: Yes • Extent name: naa.600601601d311f00fca3ea501beddd11:1 range: 0 - 20223 (MB) • /var/log # esxcfg-volume -m 496f202f-3ff43d2e-7efe-001f29595f9d • Mounting volume 496f202f-3ff43d2e-7efe-001f29595f9d • Error: Unable to mount this VMFS3 volume due to the original volume is still online

  49. esxcfg-volume (ctd) • In this next example, a clone LUN of a VMFS LUN is presented back to the same ESX server. We cannot use either the mount or the persistent-mount option, since the original LUN is already presented to the host, so we will have to resignature:
# esxcfg-volume -l
VMFS3 UUID/label: 48d247dd-7971f45b-5ee4-0019993032e1/cormac_grow_vol
Can mount: No
Can resignature: Yes
Extent name: naa.6006016043201700f30570ed09f6da11:1 range: 0 - 15103 (MB)

  50. esxcfg-volume (ctd) • # esxcfg-volume -r 48d247dd-7971f45b-5ee4-0019993032e1 • Resignaturing volume 48d247dd-7971f45b-5ee4-0019993032e1 • # vdf • Filesystem 1K-blocks Used Available Use% Mounted on • /dev/sdg2 5044188 1595804 3192148 34% / • /dev/sdd1 248895 50780 185265 22% /boot • . • . • /vmfs/volumes/48d247dd-7971f45b-5ee4-0019993032e1 • 15466496 5183488 10283008 33% /vmfs/volumes/cormac_grow_vol • /vmfs/volumes/48d39951-19a5b934-67c3-0019993032e1 • 15466496 5183488 10283008 33% /vmfs/volumes/snap-397419fa-cormac_grow_vol • Warning – there is no vdf command in ESXi. However the df command reports on VMFS filesystems in ESXi.
