Blog Archives

vSphere 5 – Storage pt.1 VMFS and Provisioning

VMFS:

VMware® vStorage Virtual Machine File System (VMFS) is a high-performance cluster file system that provides storage virtualization optimized for virtual machines. Each virtual machine is encapsulated in a small set of files, and VMFS is the default storage system for these files on physical SCSI disks and partitions. This file system enables the use of VMware® cluster features such as DRS, High Availability, and other storage enhancements.
For more information please see the following document here and the following KB here.

Upgrading VMFS:

There are two ways to upgrade VMFS to version 5 from the previous 3.xx versions. An important consideration when upgrading to VMFS-5 or provisioning new VMFS-5 is that legacy ESX hosts will not be able to see the new VMFS-5 partitions, because of enhancements made to ESXi and its partitioning. Upgrading to VMFS-5 is irreversible, so always consider what you are doing. Lastly, there are many ways to provision VMFS-5; these are just two of the more common ones.

Method 1: Online Upgrade

Although an online upgrade does give you some of the new features in VMFS-5, it does not give you all of them. However, it is the least impacting option and can be performed at any time without an outage. Below are the features you will not gain by doing an in-place upgrade (a quick way to check these attributes from the ESXi shell follows the list):

  • VMFS-5 upgraded from VMFS-3 continues to use the previous file block size, which may be larger than the unified 1MB file block size.
  • VMFS-5 upgraded from VMFS-3 continues to use 64KB sub-blocks and not the new 8K sub-blocks.
  • VMFS-5 upgraded from VMFS-3 continues to have a file limit of 30,720 rather than the new file limit of > 100,000 for newly created VMFS-5.
  • VMFS-5 upgraded from VMFS-3 continues to use the MBR (Master Boot Record) partition type; when the VMFS-5 volume is grown above 2TB, it automatically and seamlessly switches from MBR to GPT (GUID Partition Table) with no impact to the running VMs.
  • VMFS-5 upgraded from VMFS-3 continues to have its partition starting on sector 128; newly created VMFS-5 partitions will have their partition starting at sector 2048.
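As a quick sanity check, the ESXi shell can report which of these attributes a given datastore actually has. A minimal sketch, assuming an ESXi 5.0 host and a datastore named Datastore01 (the name and naa ID are placeholders):

    # Query the file system attributes of a datastore; the output includes
    # the VMFS version and the file block size. An upgraded volume may still
    # show its old VMFS-3 block size (2, 4, or 8 MB), while a freshly created
    # VMFS-5 volume shows the unified 1 MB block size.
    vmkfstools -P /vmfs/volumes/Datastore01

    # The partition table type (MBR vs. GPT) and the starting sector can be
    # checked with partedUtil against the backing device.
    partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx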

RDM – Raw Device Mappings

  • There is now support for passthru RDMs to be ~ 60TB in size.
  • Non-passthru RDMs are still limited to 2TB – 512 bytes.
  • Both upgraded VMFS-5 & newly created VMFS-5 support the larger passthru RDM.

The end result of using the in-place upgrade can be the following:

  • Performance is not optimal.
  • Non-standard configurations can remain in place.
  • Disk alignment will be a persistent issue carried over from older environments.
  • The lower file limit can be a constraint in some cases.

Method 1: How to perform an “Online” upgrade for VMFS-5

Upgrading a VMFS-3 to a VMFS-5 file system is a single-click operation. Once you have upgraded the host to VMware ESXi™ 5.0, go to the Configuration tab > Storage view. Select the VMFS-3 datastore, and above the Datastore Details window an "Upgrade to VMFS-5" option will be displayed:


Figure 3. Upgrade to VMFS-5

The upgrade process is online and non-disruptive. Virtual machines can continue to run on the VMFS-3 datastore while it is being upgraded. Upgrading the VMFS file system version is a one-way operation. There is no option to reverse the upgrade once it is executed. Additionally, once a file system has been upgraded, it will no longer be accessible by older ESX/ESXi 4.x hosts, so you need to ensure that all hosts accessing the datastore are running ESXi 5.0. In fact, there are checks built into vSphere which will prevent you from upgrading to VMFS-5 if any of the hosts accessing the datastore are running a version of ESX/ESXi that is older than 5.0.

As with any upgrade, VMware recommends that a backup of your file system is made prior to upgrading your VMFS-3 file system to VMFS-5.
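If you prefer the ESXi shell over the client, the same one-way upgrade can be triggered from the command line. A minimal sketch, assuming an ESXi 5.0 host and a datastore named Datastore01 (a placeholder); as above, take your backup first:

    # Option A: upgrade the VMFS-3 volume by its label via esxcli
    esxcli storage vmfs upgrade -l Datastore01

    # Option B: the vmkfstools equivalent, by mount point
    vmkfstools -T /vmfs/volumes/Datastore01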

Once the VMFS-5 volume is in place, the size can be extended to 64TB, even if it is a single extent, and ~2TB Virtual Machine Disks (VMDKs) can be created, no matter what the underlying file-block size is. These features are available ‘out of the box’ without any additional configuration steps.
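On the growth point, here is a rough sketch of extending a volume from the ESXi shell, assuming the LUN has already been grown on the array side. The naa ID and sector values are placeholders, and the start sector must match the existing partition (128 for upgraded volumes, 2048 for newly created ones):

    # Rescan so the host sees the larger LUN
    esxcli storage core adapter rescan --all

    # Inspect the current partition table and the new last usable sector
    partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx
    partedUtil getUsableSectors /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx

    # Resize partition 1 to the new end sector, then grow the VMFS on it
    partedUtil resize /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx 1 <startSector> <newEndSector>
    vmkfstools --growfs /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1 /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1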

NOTE: Some of this content is excerpted from VMware documentation and sources.

Method 2: Provisioning New VMFS-5

This method covers provisioning new VMFS-5 rather than performing an "online" upgrade; essentially, this is the same process you would use to provision any new VMFS LUN for ESXi 5. Here are the benefits of newly provisioned VMFS-5 over an "online" upgrade:

  • VMFS-5 has improved scalability and performance.
  • VMFS-5 does not use SCSI-2 reservations; it uses the ATS VAAI primitive on arrays that support it (a quick check is sketched after this list).
  • VMFS-5 uses GPT (GUID Partition Table) rather than MBR, which allows for pass-through RDM files greater than 2TB.
  • Newly created VMFS-5 datastores use a single block size of 1MB.
  • VMFS-5 has support for very small files (<1KB) by storing them in the metadata rather than in the file blocks.
  • VMFS-5 uses sub-blocks of 8K rather than 64K, which reduces the space used by small files.
  • VMFS-5 uses SCSI_READ16 and SCSI_WRITE16 cmds for I/O (VMFS-3 used SCSI_READ10 and SCSI_WRITE10 cmds for I/O).
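Since the locking improvement only applies when the array actually supports VAAI, it is worth confirming per device. A quick hedged check from the ESXi shell on an ESXi 5.0 host (the naa ID is a placeholder):

    # "ATS Status: supported" in the output means the device can use
    # hardware-assisted locking instead of SCSI-2 reservations
    esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx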

Other Enhancements:

  • Disk alignment for guest OSes becomes transparent and has less impact.
  • I/O performance and scalability are better on newly created VMFS-5 than on an online-upgraded volume.

As you can see, normal provisioning of VMFS-5 is a lot more robust in features and offers a great deal of improvement over just performing an "online" upgrade. The online upgrade is easy and seamless, but all the benefits should be weighed under normal circumstances. In my case the chosen method would be Method 2. The only instance in which an "online" upgrade would normally be considered is if you were already at capacity on an existing array; in that scenario it could be viewed as the more beneficial route. Also, if you do not have Storage vMotion licensed through VMware, further consideration of how to migrate to the new VMFS would have to be made, and migrating workloads to a new VMFS-5 would be a bit more of a challenge in that case as well. However, this is not an issue under most circumstances.

Method 2: How To provision new VMFS-5 for ESXi

  1. Connect to vSphere vCenter with the vSphere Client.
  2. Highlight a host and click the "Configuration" tab in the right pane.
  3. Click "Storage".
  4. In the right pane click "Add Storage" (see image).
  5. Select the LUN you wish to add.
  6. Expand the Name column and record the last four digits of the naa name; in this case it will be 0039. Click "Next".
  7. Select the "VMFS-5" option.
  8. Current Disk Layout – click "Next".
  9. Name the datastore using an abbreviation of the customer's name, the type of storage, the datastore number, and the LUN LDEV (yes, a standard). In this example, "Cust-Name" = name, "SAN" = type, "01" = datastore number, "LDEV" = 0039, giving cus-nam-san-01-0039.
  10. Select the radio button "Maximum available space" and click "Next".
  11. Click Finish and watch for the task to complete at the bottom of the vSphere Client (a CLI equivalent of steps 4-11 is sketched after this list).
  12. After the task completes, go to Home > Inventory > Datastores.
  13. Make sure there is a Fiber Storage folder created. Under that folder create a tenant folder and relocate the datastores into the new tenant-name folder.
  14. After moving the folder you may need to provision this datastore for vCloud. Proceed to the optional method for this below.
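For those who prefer the command line, here is a rough ESXi shell equivalent of steps 4 through 11. This is a sketch, not the official procedure: the naa ID, the <lastUsableSector> value, and the datastore name are placeholders, and the long GUID is the standard VMFS partition type GUID:

    # Steps 5-6: list the devices and find the LUN by the last four digits
    # of its naa name (0039 in this example)
    esxcfg-scsidevs -c

    # Find the last usable sector for the new GPT partition
    partedUtil getUsableSectors /vmfs/devices/disks/naa.xxxxxxxxxxxx0039

    # Lay down a GPT label with a single VMFS partition starting at sector 2048
    partedUtil setptbl /vmfs/devices/disks/naa.xxxxxxxxxxxx0039 gpt "1 2048 <lastUsableSector> AA31E02A400F11DB9590000C2911D1B8 0"

    # Steps 7-10: format the partition as VMFS-5 with the name from step 9
    vmkfstools -C vmfs5 -S cus-nam-san-01-0039 /vmfs/devices/disks/naa.xxxxxxxxxxxx0039:1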

Note: Some of the information contained in this blog post is provided by VMware articles and their website, http://www.vmware.com.

Raw Device Mapping – RDM in vSphere 4 and 5 Resources and Experiences

RDM (Raw Device Mapping) has changed a little between VI and vSphere, and RDMs have continued to improve over time. There are a lot of resources out there for RDMs in general, but I wanted to do my own research, learn the particulars, note some of the blogs I used, and record the differences between vSphere 4 and vSphere 5. I don't think things have changed a whole lot for RDMs other than the maximum size of a pRDM (physical or pass-thru), which has been increased to almost 64TB.

Now RDM is more a topic of "use case" than anything. Other things like "performance" can be debated on all kinds of levels, and there are usually workarounds that avoid using RDMs. The real limitation of RDMs, in my opinion, is not being able to back them up with the vStorage API; that is my only real personal gripe. The size of an RDM can sometimes be more accommodating for large databases and large file servers; again, this is "use case". For the most part, when I see RDMs implemented in our environment, it was done based on bad logic, meaning some people did not understand the risk associated with what they were doing, or it wasn't the best "use case" for what they intended. Mostly, when I see a pRDM used, a vRDM would have worked better.
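For context, the two compatibility modes are created with different vmkfstools flags. A minimal sketch, assuming an ESXi 5.0 host; the naa ID and paths are placeholders:

    # Physical (pass-thru) compatibility mode, pRDM: SCSI commands go almost
    # directly to the LUN; no VMware snapshots, but sizes up to ~60TB
    vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/Datastore01/MyVM/MyVM_rdmp.vmdk

    # Virtual compatibility mode, vRDM: virtualized like a VMDK, so snapshots
    # and cloning work, but limited to 2TB minus 512 bytes
    vmkfstools -r /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/Datastore01/MyVM/MyVM_rdm.vmdk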

A misconception: "RDMs are more secure because you cannot snapshot them"

I am not sure where to start this discussion. This is what a colleague of mine told me around the time I first started learning vSphere 4, and I kind of believed him at first, but he too was relatively new, and I know that in a virtual environment there is a way around almost anything.

This was a hot topic because we were virtualizing AD servers, and the engineers were arguing with the AD team about how they should implement AD on vSphere. The security team didn't want anything to do with it and didn't want to go along with it at all. So the engineer(s) decided to throw out the RDM as a way to keep people from being able to snapshot AD, both to keep AD from blowing up (a misconception as well) and to secure the data, because you couldn't just copy it off. Their thinking was that if no one can actually see the VMDK file, then no one can steal or retrieve the data. What a misconception that is: there are many ways to convert any RDM into a VMDK file, as long as it's not over a particular size, that is, 2TB minus 512 bytes.

You can use a variety of tools, from vmkfstools, cold migrations, and Storage vMotion (vRDM) to cloning. Of course there are limitations, like existing snapshots, which can prevent this from happening. What you need to note is that if you are using an RDM for a particular case, make sure you DON'T accidentally convert it to a VMDK file.
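To show how thin that "security" really is, cloning an RDM's mapping file with vmkfstools copies the contents of the mapped LUN into a plain virtual disk. A minimal sketch with placeholder paths, run while the VM is powered off:

    # Clone the RDM mapping file to a new thin-provisioned virtual disk; the
    # raw LUN's contents are copied into the destination VMDK, effectively
    # converting the RDM into a regular virtual disk
    vmkfstools -i /vmfs/volumes/Datastore01/MyVM/MyVM_rdm.vmdk -d thin /vmfs/volumes/Datastore01/MyVM/MyVM_converted.vmdk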

The following reference is from a KB located here:

Cold Migration

With file relocation:

  • Any non-RDM virtual disks are physically moved to the destination.
  • The virtual machine configuration files are physically moved to the destination.
  • Raw LUNs themselves cannot be moved, as they are raw disks presented from the SAN. However the pointer files (RDMs) can be relocated if required.
  • When performing a cold migration of a virtual machine with RDMs attached to it, the contents of the raw LUN mapped by the RDM are copied into a new .vmdk file at the destination, effectively converting or cloning a raw LUN into a virtual disk. This also applies when the virtual machine is not moving between ESX hosts. In this process, your original raw LUN is left intact. However, the virtual machine no longer reads or writes to it. Instead, the newly-created virtual disk is used.
  • If you wish to cold migrate a virtual machine without cloning or converting its RDMs, remove them from the configuration of the virtual machine before migrating. You can delete the RDM from the disk when removing it (the raw LUN contents are not changed). Re-add them to the configuration when completed.

Again, other references can be found in the KB. Also, if you want to convert your RDM from physical to virtual mode, just follow this KB here. Scott Lowe also did a great job writing about this subject in regard to Storage vMotion here.
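The KB's approach essentially comes down to recreating the mapping file in the other compatibility mode while the VM is powered off. A rough sketch under that assumption, with placeholder paths and naa ID; remove the disk from the VM first, and note that deleting a mapping file does not touch the data on the raw LUN itself:

    # Delete the old pass-thru mapping file (pointer only; LUN data is untouched)
    vmkfstools -U /vmfs/volumes/Datastore01/MyVM/MyVM_rdmp.vmdk

    # Recreate the mapping in virtual compatibility mode (-r instead of -z),
    # then re-add the disk to the VM through the vSphere Client
    vmkfstools -r /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/Datastore01/MyVM/MyVM_rdm.vmdk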

In short, I just wanted to document a particular reference and the misconception of RDMs being secure, because you can easily convert an RDM to a VMDK and a VMDK to an RDM.

I would like to thank VMware and all the VMware bloggers. If you know of any other good resources, let me know and I'll update this post.

Resources vSphere 4.1:
http://www.vmware.com/pdf/vsphere4/r41/vsp_41_vm_admin_guide.pdf (pg.28)
http://www.vmware.com/pdf/vsphere4/r41/vsp_41_esxi_server_config.pdf (Lots of references to RDM)
http://www.virtuallifestyle.nl/2010/01/recommended-detailed-material-on-rdms/ (Lots of Good stuff)
http://www.virtualizationteam.com/virtualization-vmware/vmware-vi3-virtualization-vmware/dont-use-vmware-raw-device-mapping-rdm-for-performance-but.html (RDM performance Blog)
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1005241 (Great resource for learning RDMs and Conversions)
http://www.vmware.com/files/pdf/performance_char_vmfs_rdm.pdf (RDM Performance Article)
http://blog.scottlowe.org/2010/08/18/storage-vmotion-with-rdms/ (Scott Lowe’s Blog on sVmotion RDM)

Resources vSphere 5:
http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-50-virtual-machine-admin-guide.pdf (pg. 39)
http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-50-storage-guide.pdf (pg. 140)

***Disclaimer: The thoughts and views expressed on VirtualNoob.wordpress.org and by Chad King in no way reflect the views or thoughts of his employer or of any other company. These are his personal opinions, which are formed on his own. Also, products improve over time and some things may be out of date. Please feel free to contact us and request an update, and we will be happy to assist. Thanks!~