vSphere 5 – Storage pt.3: LUN Sizing – Why It Matters

Well, I guess I am on a roll this week; a lot of my themes have been around storage and VMware. I don’t think that is a bad thing, but I am seeing some gaps out there as far as considerations and recommendations go. My only point in this post is to share what you should consider when facing LUN sizing after your vSphere 5 upgrade or fresh install. I have to wonder just how many enterprises out there have seriously pushed the envelope of LUN sizing in VMware. One has to ask: “If you are carving up large LUNs, does that mean you’re scaling up?” There are many implications to consider when designing your storage. One of the more critical pieces is IOps, relative to the cluster size and what your target workload is. With bigger LUNs this is something you have to consider, and I do think it is common knowledge for the most part.

There are many things to consider when deciding on a LUN size for vSphere 5. I sincerely believe VMware is sometimes putting us all in a scale-up situation. With the limitations of SDRS and Fast Provisioning, it has really got me thinking. It’s going to be hard to justify a design scenario of a 16-node “used to be” cluster when you are trying to decide whether you really want to use some of these other features. You have heard me say this before, but I will say it again: it seems more and more that VMware is targeting small to medium sized businesses with some of these features, while larger companies (with much bigger clusters) now have to invest even more time in reviewing their current designs and standards. Hey, that could be a good thing :) . Standards to me are a huge factor for any organization. That part seems to take the longest to define, and in some cases even longer to get other teams to agree to. I don’t think VMware thought about some of those implications, but I am sure they did their homework and knew just where a lot of this was going to land…

With that said, I will stop rambling and get to the heart of the matter, or better yet, the heart of the storage.

So, after performing an upgrade I have been wondering what LUN size would work best. I believe I have some pretty tough storage and a solid platform (Cisco UCS), so we can handle some IOps. I wanted to share some numbers with you that I found very, VERY interesting. I have begun to entertain the notion of utilizing thin provisioning even further. However, we are all aware that VMware still has an issue with the UNMAP command, which I have pointed out in previous blogs (here). Being that I have been put between a rock and a hard place, I believe Update 1 to vSphere 5 at least addressed half of my concern. The other half it didn’t address: I now have to defer to a manual process that involves an outage to reclaim that thin-provisioned space… I guess that is a problem I can live with given the way we use our storage today. It doesn’t cause us too much pain, but it is a pain nonetheless.

Anyways, here is my homework on LUN sizing and how to get your numbers (estimates):
(Note: This is completely hypothetical and not related to any specific company or customer; this will also include Thin Provisioning and Thick)

  • Factor an average IOps per LUN (from your storage vendor, vCenter, or an ESXi host if you can)

    Take the total IOps across all production LUNs and divide it by the number of datastores

    Total # IOps / # of Datastores

  • Gather the average numbers of virtual machines per datastore

    Total # VMs / # of Datastores

    Try to use real-world production virtual machines

  • Decide on the new LUN size and derive a multiplication factor from your current baseline.

    So if you want to use 10TB datastores and you are currently using 2TB datastores:

    10TB / 2TB = 5 (this is your multiplication factor for IOps and the VM:datastore ratio)
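The steps above can be sketched in a few lines of Python. The totals here are hypothetical placeholders chosen to reproduce the post's baseline of 16 VMs and 1,200 IOps per datastore; substitute figures from your own storage vendor tools, vCenter, or an ESXi host:

```python
# Hypothetical totals; replace with real numbers from your environment.
total_iops = 9600       # total IOps across all production LUNs
total_vms = 128         # total production VMs
datastores = 8          # number of datastores

# Step 1: average IOps per datastore
avg_iops_per_datastore = total_iops / datastores    # 1200

# Step 2: average VMs per datastore
avg_vms_per_datastore = total_vms / datastores      # 16

# Step 3: multiplication factor for the new LUN size
current_lun_tb = 2
target_lun_tb = 10
factor = target_lun_tb / current_lun_tb             # 5
```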

So now let’s use an example to put this to practical use… and remember to factor in free space for maintenance; I always keep 10% free.

Let’s say we have a customer with the following numbers before:

16 VMs per Datastore

1,200 IOps average per Datastore (we will have to account for peak too)

2TB Datastore LUNS

Now for the math (let’s say the customer is moving to 10TB LUNs, so the factor is 5):

16 x 5 = 80 VMs per Datastore (Thick Provisioned)

1,200 x 5 = 6,000 IOps per Datastore…
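The thick-provisioning math above is a straight multiplication by the LUN-size factor; a quick sketch using the post's hypothetical baseline:

```python
# Hypothetical baseline from the example: per-2TB-datastore numbers.
baseline_vms = 16        # VMs per 2TB datastore
baseline_iops = 1200     # average IOps per 2TB datastore

# Thick provisioning: straight factor of 5 (10TB / 2TB)
factor = 10 / 2
vms_per_10tb = baseline_vms * factor     # 80 VMs per 10TB datastore
iops_per_10tb = baseline_iops * factor   # 6000 IOps per 10TB datastore
```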

Not bad at all, but now let’s seriously look at thin provisioning, where the numbers are QUITE different. Let’s say we check our storage software and it tells us that on average a 2TB LUN only really uses 500GB of space for its 16 VMs. Let’s also factor in some room (10% for alerting and maintenance purposes this time around). You can also download RVTools to get a glimpse of actual VM usage versus provisioned space for some thin numbers.

First off:

16 VMs per 500GB, times 4 for the 2TB LUN, makes 64 thin VMs per 2TB Datastore.

Multiply that by the new usable LUN size: 9TB / 2TB = 4.5 (the 10TB LUN minus 10% reserved for alerting and maintenance purposes; this could also be considered conservative).

64 x 4.5 = 288 VMs on average per 10TB Datastore (and that’s with 1TB reserved, too!)

We aren’t done yet; here come the IOps. Let’s use 1,500 IOps per datastore (bumping the 1,200 average up to account for peak). Since we multiplied the VMs by a factor of 4, we do the same for the IOps:

1,500 x 4 = 6,000 IOps per 2TB LUN, using thin provisioning on the VMs

6,000 x 4.5 = 27,000 IOps per 10TB LUN.
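The thin-provisioning math can be sketched the same way. The 500GB consumed-space figure and the 1,500 peak-adjusted IOps are the post's hypothetical inputs, not measurements:

```python
# Thin provisioning: the storage software reports ~500GB actually
# consumed per 2TB LUN, so a 2TB LUN holds 4x the 16-VM baseline.
vms_per_2tb_thin = 16 * (2000 // 500)    # 64 thin VMs per 2TB datastore
iops_per_2tb_thin = 1500 * 4             # 6000 IOps (IOps scaled by the same 4x)

# Keep 10% of the 10TB LUN free for alerting/maintenance: 9TB usable.
usable_factor = 9 / 2                    # 4.5
vms_per_10tb_thin = vms_per_2tb_thin * usable_factor     # 288 VMs
iops_per_10tb_thin = iops_per_2tb_thin * usable_factor   # 27000 IOps
```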

So this leaves us with the following numbers for thick and thin:

VM to 10TB Datastore ratios:

80 Thick

288 Thin

IOps to 10TB Datastore ratios:

6,000 IOps Thick Provisioning

27,000 IOps Thin Provisioning

So, I hope this brings to light some things you will have to think about when choosing a LUN size. Also note that this is probably more of a service-provider type of scenario; some may simply use a single 64TB LUN, though I am not sure I would recommend that. It all comes down to use case and how it can be applied. This also raises the question of what the point of some of those other features is if you leverage thin provisioning. Here are some closing thoughts and things I would recommend:

  • Consider Peak loads for your design; the maximum IOps you may be looking for in some cases
  • Get an average/max per VM datastore ratio (locate your biggest Thin VM)
  • Consider tiered storage and how it could be better utilized
  • Administration and management overhead; essentially, the larger the LUN, the less overall provisioning time and so on.
  • A VAAI-capable array for those thin benefits (running that UNMAP reclaim script…)
  • Benchmark and test using some other tools on that bigger LUN to ensure stability at higher IOps
  • Lastly the storage array benchmarks and overall design/implementation
  • The number of VMs you can scale on a LUN can affect your cluster design; you may not want to enable your customers to scale that much
  • Alerting considerations and how you will manage it efficiently to not be counterproductive.
  • Consider other things like SDRS (fast provisioning gets ridiculous with Thin Provisioning)
  • Storage latency and things like queue depths can be a pain point.

I hope this helps some of those out there who have been wondering about this stuff. The LUN size for me dramatically affects my cluster design and what I am looking to achieve. You also want to load test your array, or at least get some proven specs on it. I currently work with HDS VSP arrays and these things can handle anything you can throw at them. Whether the additional capacity you need is space, IOps, or processing, you can easily scale them out or up. Please share your thoughts on this as well. Here are some great references:

http://www.yellow-bricks.com/2011/07/29/vmfs-5-lun-sizing/
http://serverfault.com/questions/346436/vmware-vmfs5-and-lun-sizing-multiple-smaller-datastores-or-1-big-datastore
http://communities.vmware.com/thread/334553
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2014849 

Note: these numbers are hypothetical, but it’s all in the numbers.


About Cwjking

I am an IT professional who has worked in the industry for over 10 years. Starting in Microsoft administration and solutions, I was also a freelance consultant for small businesses. Since I first saw virtualization I have been fascinated by the concept. I currently specialize in VMware technology and consult daily on many different types of VMware solutions. My current role is hands-on administration, technical design, and consulting.

Posted on June 4, 2012, in Hardware, Maintenance, Storage, VMFS-5, vOperations, vSphere 4, vSphere 5. 2 Comments.

  1. Which version of vSphere are you running? I thought the latest update enables the UNMAP primitive.
