Category Archives: vSphere 4

Cloud Management – Turbonomics is Awesome!

Hello my friends. It has been quite a while since I last blogged, but I wanted to take some time to share some of my experience from the past couple of years. I have had the opportunity to work with some great companies and people, and it has definitely been an enlightening experience.

I had the privilege of being a part of a special project nearly 5 years ago which began my career in the cloud. I got to engineer and deploy one of the nation's first GSA clouds, which was a great experience. As time rolled on and cloud was adopted, many things came to light. Being a VMware-savvy guy, I really didn't have the time to learn all of the new technologies that were directly competing. At that time, Amazon was getting big, VMware was about to release vRA, and the market stood still… or so it felt.

Microsoft had launched its on-prem cloud, and before we knew it we had to start getting serious about the cost of our delivery and compute. If you have never had the pleasure of working for a service provider, let me tell you – it's all about cost. So we put Azure to the test, compared it, vetted it, and did everything we could to ensure it could be operationally supported. It was a very interesting time and a nice comparison against our existing IaaS architecture. We definitely had our work cut out for us.

Since then the challenges of hybrid cloud have become real. Although some vendors had good solutions like UCS InterCloud Fabric or vCloud Connector… (insert whatever else here), we always seemed to have requirements unique enough to disqualify them. Needless to say, we still deployed them, stood them up, tested them, and found value, but it still wasn't justifiable enough to warrant a change. Being a service provider isn't about offloading to another cloud… it's about how you can upsell your services and provide more value for customers.

As time went on, people adopted Cisco UCS into their infrastructures, and eventually updating and maintaining that infrastructure became critical: the speed of delivery is only hindered by how fast we can adopt new offerings. If we cannot seamlessly update, migrate, or refresh to the new, then what can we do?

“It’s so old it’s not even supported!”
“Wow, no new firmware for 5 years?!”
“Support for VMware has lapsed :(”

“Who cares?”

You can automate this pain away easily. Just because one vendor doesn’t support a feature or a new version does not mean you have to keep burdening your IT staff. If you could standardize operational processes across your cloud(s) – visibility, integration, and support – would you?

The biggest challenge is getting out of the old and into the new. Most legacy infrastructure runs on VMware, and you can make that move with Turbonomics and a variety of other tools. One of the benefits of going third party is that you don’t have lock-in to any infrastructure or software. You can size it, optimize it, price it, and compare it to ensure things run as they should. Versioning and upgrades will always be a challenge, but as long as you can ensure compliance, provisioning, optimization, and performance, they won’t be an afterthought. I found Turbonomics always got the job done and always responded with a solution – and more than that, at the push of a button.

Some of the benefits:
– Agnostic integration with a large set of vendors
– Automated provisioning for various types of compute
– Easily retrofit existing infrastructure for migration
– Elastic compute models
– Cost comparison, pricing of existing workloads, etc.
– e.g. Amazon AWS, Azure
– Track and exceed your ROI goals
– Eliminate resource contention
– Automate and schedule migrations between compute platforms (IaaS > DBaaS)
– Assured performance, control, and automated re-sizing
– Not version dependent and usable in a wide variety of scenarios (I can elaborate if needed)
– Get rolling with it almost instantly

Five years on and I still think Turbonomics is a great product. I used it extensively in the early days and also worked with it during the vCloud integration piece. The free version is also amazing and very helpful. Spending time checking capacity, double-checking data, ensuring things are proper and standard – all that stuff you can forget about. You can configure your clouds – private, public, or dedicated – into Turbonomics quickly.

You just have to trust proven software, especially when it’s been 7 years in the making and exceeds capabilities that most tools require significant configuration for. Also, keep in mind that Turbonomics can learn your environment, and the value of understanding the platform and providing insight can be huge. You have to admit that some admins may not understand or know other platforms. This simplifies all of that by simply understanding the workload and the infrastructure it runs on.

Other Great Information or References:
Cisco One Enterprise Suite – Cisco Workload Optimization Manager:
CWOM is offered with the Cisco ONE Enterprise Suite:
http://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/one-enterprise-suite/solution-overview-c22-739078.pdf
http://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/one-enterprise-suite/at-a-glance-c45-739098.pdf
https://www.sdxcentral.com/articles/news/cisco-launches-new-ucs-servers-hybrid-cloud-management-software/2017/07/

Turbonomics and BMC:
“Running it Red Hot with Turbonomics”
https://turbonomic.com/resources/videos/cloud-economics-on-prem-or-off-with-turbonomic-bmc/

 


vSphere 5 – Storage pt.3 LUN Sizing – Why it matters..

Well, I guess I am on a roll this week. A lot of my themes have been around storage and VMware, which isn’t a bad thing, but I am seeing some gaps out there as far as considerations and recommendations go. My only point in this post is to share what you should consider when facing this after your vSphere 5 upgrade or fresh install. I have to wonder just how many enterprises out there have seriously pushed the envelope of LUN sizing in VMware. One has to think: “If you are carving up large LUNs, does that mean you’re scaling up?” There are many implications to consider when designing your storage. One of the more critical pieces is IOps, along with cluster size and your target workload. With bigger LUNs this is something you have to consider, and I do think it is common knowledge for the most part.

There are so many things to consider when deciding on a LUN size for vSphere 5. I sincerely believe VMware is sometimes putting us all in a scale-up situation. With the limitations of SDRS and Fast Provisioning, it has really got me thinking. It’s going to be hard to justify a design for a 16-node “used to be” cluster when you are trying to decide whether you really want to use some of these other features. You have heard me say this before, but I will say it again: it seems more and more that VMware is targeting these features at small to medium-sized businesses, while larger companies (with much bigger clusters) now have to invest even more time in reviewing their current designs and standards – hey, that could be a good thing 🙂 . Standards to me are a huge factor for any organization. That part seems to take the longest to define and in some cases even longer to get other teams to agree to. I don’t think VMware thought about some of those implications, but I am sure they did their homework and knew just where a lot of this was going to land…

With that being said, I will stop rambling and get to the heart of the matter – or better yet, the heart of the storage.

So, after performing an upgrade I have been wondering what LUN size would work best. I believe I have some pretty tough storage and a solid platform (Cisco UCS), so we can handle some IOps. I wanted to share some numbers with you that I found very, VERY interesting. I have begun to entertain the notion of utilizing thin provisioning even further. However, we are all aware that VMware still has an issue with the UNMAP command, which I have pointed out in previous blogs (here). Being put between a rock and a hard place, I believe update 1 to vSphere 5 at least addressed half of my concern. The other half it didn’t address is the fact that I now have to defer to a manual process, involving an outage, to reclaim that thin-provisioned space… I guess that is a problem I can live with given the way we use our storage today. It doesn’t cause us too much pain, but it is a pain nonetheless.

Anyways, so here is my homework on LUN sizing and how to get your numbers (Estimates):
(Note: This is completely hypothetical and not related to any specific company or customer; this will also include Thin Provisioning and Thick)

  • Find the average IOps per LUN (from your storage vendor, vCenter, or an ESXi host if you can)

    Take the total IOps across all production LUNs and divide it by the number of datastores

    Total # IOps / # of Datastores

  • Gather the average number of virtual machines per datastore

    Total # VMs / # of Datastores

    Try to use real-world production virtual machines

  • Decide on the new LUN size and use your current size as the basis for a multiplication factor (a short Python sketch of these steps follows right after this list)

    So if you want to use 10TB datastores and you are currently using 2TB datastores:

    10TB / 2TB = 5 (this is your multiplication factor for IOps and the VM:datastore ratio)
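If you want to repeat this estimate quickly, here is a minimal Python sketch of the same three steps (the function and its parameter names are my own, purely for illustration):

```python
def lun_scaling_estimate(total_iops, total_vms, num_datastores,
                         current_lun_tb, new_lun_tb, reserve_pct=0.0):
    """Project per-datastore VM count and IOps at a new LUN size.

    Steps: average today's IOps and VM count per datastore, then scale
    both by the ratio of the new (usable) LUN size to the current one.
    reserve_pct optionally holds back space for alerting/maintenance.
    """
    avg_iops = total_iops / num_datastores            # step 1
    avg_vms = total_vms / num_datastores              # step 2
    usable_new_tb = new_lun_tb * (1 - reserve_pct)
    factor = usable_new_tb / current_lun_tb           # step 3
    return {
        "factor": factor,
        "vms_per_datastore": avg_vms * factor,
        "iops_per_datastore": avg_iops * factor,
    }

# Example: 16 VMs and 1200 IOps on average per 2TB datastore, moving to 10TB LUNs.
print(lun_scaling_estimate(total_iops=1200, total_vms=16, num_datastores=1,
                           current_lun_tb=2, new_lun_tb=10))
# -> factor 5.0, 80 VMs and 6000 IOps per 10TB datastore
```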

So now let’s put this to practical use with an example… and remember to factor in free space for maintenance; I always keep 10% free.

Let’s say we have a customer with the following numbers before:

16 VMs per Datastore

1200 IOps average per Datastore (we will have to account for peak too)

2TB Datastore LUNs

Now for the math (let’s say the customer is moving to 10TB LUNs, so this would be a factor of 5):

16 x 5 = 80 VMs per Datastore (Thick Provisioned)

1200 x 5 = 6000 IOps per Datastore…

Not bad at all, but now let’s seriously take a look at thin provisioning, where the numbers are QUITE different. Let’s say we check our storage software and it tells us that, on average, a 2TB LUN only really uses 500GB of space for its 16 VMs. Let’s also factor in some room (10% this time around, reserved for alerting and maintenance purposes). You can also download RVTools to get a glimpse of actual VM usage versus provisioned space for some thin numbers.
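If you don’t have RVTools handy, a similar used-versus-provisioned glimpse can be pulled from the vSphere API. A rough pyVmomi sketch, assuming you already have a connected ServiceInstance content object (connection boilerplate omitted); treat the property names as a starting point rather than gospel:

```python
from pyVmomi import vim

def thin_usage_report(content):
    """Sum actual (committed) vs. provisioned space across all VMs,
    similar to the provisioned-versus-used view RVTools gives you."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    used = provisioned = 0
    try:
        for vm in view.view:
            s = vm.summary.storage
            used += s.committed                         # space actually consumed
            provisioned += s.committed + s.uncommitted  # the thin "promise"
    finally:
        view.Destroy()
    gib = 1024 ** 3
    print(f"Used {used / gib:.0f} GiB of {provisioned / gib:.0f} GiB provisioned")
```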

First off:

16 VMs per 500GB, so multiply by 4 for the 2TB LUN; that makes 64 thin VMs per 2TB Datastore.

Multiply that by the new usable LUN size: 9TB / 2TB = 4.5 (that’s the 10TB LUN minus 10% reserved for alerting and maintenance; this could also be considered conservative)

64 x 4.5 = 288 VMs on average per 10TB Datastore (and that’s with 1TB reserved, too!)

We aren’t done yet; here come the IOps. Let’s use 1500 IOps per 2TB datastore to leave some room for peak. Since we multiplied the VMs by a factor of 4, we want to do the same for the IOps:

1500 x 4 = 6000 IOps per 2TB LUN using thin-provisioned VMs

6000 x 4.5 = 27000 IOps per 10TB LUN.

So this leaves us with the following numbers for thick and thin:

VM to 10TB Datastore ratios:

80 Thick

288 Thin

IOps to 10TB Datastore ratios:

6000 IOps Thick Provisioning

27000 IOps Thin Provisioning
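For reference, the whole worked example above fits in a few lines of Python (hypothetical numbers straight from this example, with 1500 IOps as the peak-ish figure for the thin case):

```python
# Baseline per 2TB datastore: 16 VMs, ~1200 IOps average, ~1500 allowing for peak.
vms_per_2tb, avg_iops, peak_iops = 16, 1200, 1500

# Thick: a 10TB LUN is simply 5x a 2TB LUN.
thick_factor = 10 / 2
print(vms_per_2tb * thick_factor, avg_iops * thick_factor)   # 80 VMs, 6000 IOps

# Thin: only ~500GB of each 2TB LUN is really used, so 4x the VMs fit per LUN,
# and only 9TB of the 10TB LUN is usable (10% held back for alerting/maintenance).
thin_density = 2000 // 500          # 4x more VMs per LUN
thin_factor = 9 / 2                 # 4.5
print(vms_per_2tb * thin_density * thin_factor,    # 288 VMs
      peak_iops * thin_density * thin_factor)      # 27000 IOps
```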

So, I hope this brings to light some of the things you will have to think about when choosing a LUN size. Also note that this is probably more of a service provider type of scenario; most shops may just use a single 64TB LUN, though I am not sure I would recommend that. It all comes down to use case and how it can be applied. It also begs the question of what the point of some of those other features is if you leverage thin provisioning. Here are some closing thoughts and things I would recommend:

  • Consider peak loads for your design; the maximum IOps may be what you are looking for in some cases
  • Get an average/max VM-to-datastore ratio (locate your biggest thin VM)
  • Consider tiered storage and how it could be better utilized
  • Administration and management overhead; essentially, the larger the LUN, the less overall provisioning time and so on
  • A VAAI-capable array for those thin benefits (and for running that UNMAP reclaim)
  • Benchmark and test that bigger LUN with other tools to ensure stability at higher IOps
  • Lastly, the storage array benchmarks and overall design/implementation
  • The more VMs you can scale on a LUN, the more it affects your cluster design; you may not want to enable your customers to scale that much
  • Alerting considerations and how you will manage them efficiently so they are not counterproductive
  • Consider other things like SDRS (fast provisioning gets ridiculous with thin provisioning)
  • Storage latency and things like queue depths can be a pain point

I hope this helps some of those out there who have been wondering about this stuff. The LUN size dramatically affects my cluster design and what I am looking to achieve. You also want to load test your array, or at least get some proven specs on it. I currently work with HDS VSP arrays and these things can handle anything you throw at them. Whether you need more capacity, IOps, or processing, you can easily scale them out or up. Please share your thoughts on this as well. Here are some great references:

http://www.yellow-bricks.com/2011/07/29/vmfs-5-lun-sizing/
http://serverfault.com/questions/346436/vmware-vmfs5-and-lun-sizing-multiple-smaller-datastores-or-1-big-datastore
http://communities.vmware.com/thread/334553
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2014849 

Note: these numbers are hypothetical, but it’s all in the numbers.

vSphere – vCloud – Fast Provisioning – My Thoughts…

Yeah, some would say this post is probably overdue, but lately I have sincerely been wondering: have we been drinking some Kool-Aid around this feature? I couldn’t help but have some concerns about implementing it in VCD installations. I, in particular, am not completely sold on it. Here are just a few quick reasons why it didn’t exactly sell me.

  1. It’s a very “new” feature in VCD, which is itself still early in its years as a cloud platform.
  2. There is currently no way of updating those linked clones, unlike VMware View (some admin overhead here, as well as juggling local and shared catalogs).
  3. Added complexity (with linked images, snap chains, and how you have to handle storage motion).
  4. By default, ALL linked-clone images are misaligned (VMware has yet to address this problem). In some cases this could be a compounding factor causing additional I/O overhead.
  5. Design has to be highly considered and evaluated with a maximum of 8-node clusters (this will affect current installations as well).

So yeah, I know I look like the bad guy, but I seriously think this release was targeted more at SMB than anything. IMO, this is more of a feature for smaller businesses, because now they don’t have to go out and spend all that crazy dough on a VAAI-capable array (hooray for them :)), which begs the question…

Why do you need to enable this feature if you already leverage VAAI capable arrays?

It just seems to me that Fast Provisioning is a little premature in its release. Although VCD continues to improve, I think this feature needs some serious work before bigger shops decide to utilize it. The other downside is that we have yet to see any real progress on the UNMAP problem, and it’s now treated as a manual task we should run at certain times… or during outages, I should say. That really blows, because we all know what kinds of benefits and problems thin provisioning on an array can cause. For the most part, it’s just really bad reporting… lol.

Here are some other sources I would recommend reading; I seriously think you should read them and decide for yourself if it’s really worth it. Also, be careful not to put the cart before the ox and do your homework. Some people drink the Kool-Aid and don’t think to question or ask, “What’s really under the hood?” Fast Provisioning should never be compared to VMware View… it’s similar but not identical. I would definitely recommend reading Nick’s blog – it opened my eyes to what he calls the “fallacies” – and of course Chris has a good read too.

http://datacenterdude.com/vmware/vcd-fast-provisioning-vaai-netapp/
http://www.chriscolotti.us/vmware/info-vcloud-director-fast-provisioned-catalog-virtual-machines/
http://www.kendrickcoleman.com/index.php?/Tech-Blog/vcloud-director-15-features-that-effect-limitation-and-design.html

vSphere – Networking – ESXi Single NIC VDS Management Migration

Well, I wasn’t sure how to name this blog, as VMware continues to use all kinds of different lingo for all of its bells and whistles. I had the unique opportunity to begin migrating management interfaces, also known as VMkernel interfaces, from VSS to DVS switching. This presented a lot of struggles in the past, but it seems to me that VMware has really improved this functionality in the later versions of vSphere. I recall running into many kinds of issues when doing this on 4.0. So far, using a vCenter 5 server with a mix of 4.1 and 5.0 hosts, testing has proved to be seamless and non-disruptive. However, I would still highly recommend considering all your options and testing this method THOROUGHLY before ever touching production environments.

I was able to migrate a single physical NIC running ESXi management from a VSS to a VDS. This video covers how I did that. The reason for the video is that I got all kinds of senseless Google links when trying to search for something documented, so I did myself a favor and published one.

Remember, this is a test and is only applicable for me in a few environments. In most cases I use redundant NICs. Now, the real kicker is that migrating from a VDS back to a VSS requires a bit more thinking and planning, especially if you only have access to a single pNIC. Maybe I will cover that some other time… for now, try to use two. This may also be a solution for environments running a single 10GB uplink that need to use PVLANs or centralize management.

Up and Coming – CommVault Round 3…?

No way…

Yes, you read that right. Expect more on this over the next couple of weeks as I learn more. I will get to play with it in some capacity, so maybe I can share some good stuff. I am going to be covering some things around considerations and designs – hopefully. We all know this was a hot topic a while back, but now that the dust has settled maybe I can make some progress. CommVault has made some pretty strong improvements over the past couple of months. They have covered some of the pain points I had with it in a pretty reasonable amount of time. I don’t think this would’ve happened had some folks not made a point of showing some of these things openly. My number one gripe back in the day was that neither CommVault NOR Veeam had “TRUE” vCloud Director compatibility – IN MY OPINION (all caps shows my emphasis). I am sure that by now this and maybe a few other things have changed. I am not sure I will be doing any comparisons per se, but it will be good to see how the latest and greatest stacks up. I think by now I can easily say I know what industry standards for VMware backups look like and what the expectations and performance should be. One thing you can take to the bank is that I plan to do my homework just like last time. If there is something I don’t like or think needs improvement, I will most certainly write about it. All the folks out there who read this blog need to understand there isn’t a lot of information out there around some of these topics I cover. I would encourage anyone reviewing a CommVault solution to do their homework. There are a lot of things to consider when going with a backup product. Let’s hope this time around I don’t have to pull any punches…

vSphere 5 – Storage pt.2 vCloud and vSphere Migrations

The point..

In my last post I covered some things to think about when looking at the new VMFS-5 partitions. Obviously the point of moving to the new VMFS is to gain all the benefits explained in that previous post. What you will see in this post are the types of migrations themselves. I also want to highlight that I shared some resources at the bottom for those of you who want to review some deeper highlights. Obviously there isn’t a ton of documentation out there covering this, nor the special *features* of vSphere 5 (sVmotion issues??) that you may run into. So let’s hope I do this further justice. On to the blog!

Adding VMFS-5 to the vCloud

  1. Log in to vSphere and ensure you have a new LUN provisioned (covered above in the how-to).
  2. Log in to the vCloud Director web interface; you must be an administrator.
  3. Click the “System” tab and click on Provider VDCs. Right-click a PVDC and select “Open”.
  4. After opening the PVDC, select the Datastores tab and then click the +/- button to add/remove datastores.
  5. Browse through the datastores by clicking the > button or by searching in the top right. When you have located your datastore, highlight it, click the add button, then click “OK”. Disregard the warning.


(Note: the yellow highlights are ways you can search and browse through datastores. This is very handy when there are many to look through)


(Note: Highlight in yellow shows the datastore added successfully. This is a 20TB Datastore)

You will now see the datastore in the datastore summary tab for that PVDC

Migrating vCloud Director Virtual Machines to the “new” VMFS-5 LUN

  1. Make sure the vApp is NOT a linked clone. If it is a linked clone defer to the references below.
  2. Ensure the Datastore you want to Storage Motion the Virtual Machine to is also provisioned to the Org VDC. Do this by opening the Org vDC and selecting the “Datastores” Tab.

    Note: you can see both datastores are attached to this VDC with the organization known as App1

  3. You could then log in to the vSphere Client for the corresponding vCenter and perform a Storage vMotion. Another way of doing a Storage vMotion is by using the script William Lam wrote (see references below).
  4. If you need to perform the sVmotion yourself, follow the method below.

NOTE: I would highly recommend rolling out update 1 to all vCloud components. It includes a few major fixes that allow operations to run more smoothly. More importantly, without it the only way to sVmotion vCloud VMs is to power them off – a pretty common issue with vanilla vSphere 5 / vCloud 1.5 rollouts. I experienced this problem myself. For more information please see the references at the bottom.

Migrate a Virtual Machine with Storage VMotion in vSphere

Use migration with Storage vMotion to relocate a virtual machine’s configuration file and virtual disks while the virtual machine is powered on. You cannot change the virtual machine’s execution host during a migration with Storage vMotion. (Note: if the VM is managed by vCloud and you are not at 1.5 update 1, you may need to power off the virtual machine to perform the sVmotion. If the virtual machine is a fast-provisioned VM (linked clone), then you will need to perform the sVmotion through the API.)

Procedure

  • Ensure you are not moving a vCloud vApp; if you are, please follow the process above first.
  • Display the virtual machine you want to migrate in the inventory.
  • Right-click on the virtual machine, and select Migrate from the pop-up menu.
  • Select Change datastore and click Next.
  • Select a resource pool (keep the same one) and click Next.
  • Select the destination datastore:
    To move the virtual machine configuration files and virtual disks to a single destination, select the datastore and click Next.
    To select individual destinations for the configuration file and each virtual disk, click Advanced. In the Datastore column, select a destination for the configuration file and each virtual disk, and click Next.
  • Select a disk format and click Next:
  • Option / Description:

    Same as Source: Use the format of the original virtual disk. If you select this option for an RDM disk in either physical or virtual compatibility mode, only the mapping file is migrated.

    Thin provisioned: Use the thin format to save storage space. The thin virtual disk uses just as much storage space as it needs for its initial operations. When the virtual disk requires more space, it can grow in size up to its maximum allocated capacity. This option is not available for RDMs in physical compatibility mode. If you select this option for a virtual compatibility mode RDM, the RDM is converted to a virtual disk. RDMs converted to virtual disks cannot be converted back to RDMs.

    Thick: Allocate a fixed amount of hard disk space to the virtual disk. The virtual disk in the thick format does not change its size and from the beginning occupies the entire datastore space provisioned to it. This option is not available for RDMs in physical compatibility mode. If you select this option for a virtual compatibility mode RDM, the RDM is converted to a virtual disk. RDMs converted to virtual disks cannot be converted back to RDMs.

    NOTE: Disks are converted from thin to thick format or thick to thin format only when they are copied from one datastore to another. If you choose to leave a disk in its original location, the disk format is not converted, regardless of the selection made here.

  • Review the page and click Finish.
  • A task is created that begins the virtual machine migration process.
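If you would rather script the Storage vMotion (in the spirit of William Lam’s examples in the references), the whole procedure above boils down to a single RelocateVM_Task call against the vSphere API. Here is a minimal pyVmomi sketch; the hostname, credentials, VM name, and datastore name are placeholders, and error handling is left out:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first inventory object of the given type with this name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

ctx = ssl._create_unverified_context()   # lab only; use proper certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = find_by_name(content, vim.VirtualMachine, "app01")
    target_ds = find_by_name(content, vim.Datastore, "VMFS5-10TB-01")

    # Move the configuration file and all virtual disks to the new VMFS-5 datastore.
    spec = vim.vm.RelocateSpec()
    spec.datastore = target_ds
    # Optionally convert disks while moving: sparse = thin, flat = thick.
    spec.transform = vim.vm.RelocateSpec.Transformation.sparse

    WaitForTask(vm.RelocateVM_Task(spec))
finally:
    Disconnect(si)
```

The same caveats from the note above still apply to vCloud-managed VMs: on pre-update-1 environments the VM may need to be powered off first, and fast-provisioned (linked clone) VMs have to go through the vCloud API instead.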

References:

Linked Clones:
http://www.virtuallyghetto.com/2012/04/scripts-to-extract-vcloud-director.html
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1014249

Storage Motion Issue:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2012122

How To’s sVmotion CLI/VCO style:
http://www.virtuallyghetto.com/2012/02/performing-storage-vmotion-in-vcloud.html
http://www.virtuallyghetto.com/2012/02/performing-storage-vmotion-in-vcloud_19.html
http://geekafterfive.com/2012/03/06/vcloud-powercli-svmotion/
http://geekafterfive.com/tag/vcloud/
http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-501-virtual-machine-admin-guide.pdf

Storage Considerations for vCloud:
http://www.vmware.com/files/pdf/techpaper/VMW_10Q3_WP_vCloud_Director_Storage.pdf

VMware vSphere Labs – Infrastructure – Setting Up Active Directory on Windows 2008 R2

This tutorial runs through a quick overview of installing Active Directory 2008 R2 on a Windows virtual machine running in VMware Workstation 8. It has a video and general instructions to help you out. Enjoy!

  1. Deploy from the template
  2. Configure NICS Static
  3. Disable Extra NIC
  4. Set the gateway and DNS to the gateway listed in the “Virtual Network Editor”
  5. Keep DNS as the secondary DNS of the Domain Controller
  6. Rename machine to appropriate Computer Name to reflect your Domain Controller (sysprep gives silly names)
  7. Reboot
  8. Add Role from server manager
  9. Select Active Directory Domain Services
  10. Yes, Install the .Net Stuff….
  11. Run dcpromo.exe from PowerShell or within Server Manager under the AD role
  12. Install DNS (if not you must be doing something a bit more advanced :))
  13. Reboot and validate you can log into AD with a Domain Account.
  14. Join another Virtual Machine to the Domain

VMware vSphere – Networking Best Practices – Networking 101 pt.2

Okay Experts take it easy on me…

As you know, I have been writing various posts around building your VMware Workstation lab. One of the key points that I am trying to drive home during this lab build is how to get your environment matching a production environment as closely as you can. Obviously networking is a very broad subject, especially when it comes to implementing a production vSphere environment. I am going to attempt to do this topic justice by sharing some key points that should help you understand more about how you should design your network. I am not going to attempt to do what Kendrick Coleman does (because his designs are solid). I am only going to provide talking points and recommendations about traffic separation, why it should be separated, and the details. Also, keep in mind that one of the most important factors in networking is whether or not to use N+1. I will say that VMware highly recommends making your physical networking N+1 so you can benefit further from High Availability. So let’s get started with the traffic types.

Traffic Types:

  1. Management (High Availability)
  2. vMotion
  3. Fault Tolerance (Not in all cases)
  4. VM Networks
  5. Backup (Not in all cases)
  6. Storage/NAS (Depends on the type)

Note: backup and storage say “depends” because in some cases you may or may not have iSCSI/NAS storage or be running backups for your virtual machines, especially if you use a product like Veeam or CommVault. Fault Tolerance isn’t really used much, and I believe that even when it gets better it still may not be worth it, considering the bigger workloads and the cost in licensing as well. Here are my recommendations and the best practices I follow for dedicating traffic:

  1. Management: If possible, VLAN it and separate the traffic (to a different switch). Use teaming or a single NIC (if you set up a management kernel on another port group). You can run/share traffic with vMotion, Fault Tolerance, backup, and storage/NAS; if you do share traffic, use some sort of QoS or Network I/O Control. Be mindful that running management with all this traffic isn’t recommended, but it does give you a way to run it all over a separate switch apart from production VM traffic. If you have plenty of NICs you can run it over the VM production network (though you don’t want to expose it to that network), but you must somehow separate it with a different subnet or VLAN. In most cases I see vMotion and management being shared with Fault Tolerance (FT with big 10GB networks). Your NIC teaming should use explicit failover and override so your vMotion/FT traffic goes over a separate interface than your management traffic (see the sketch after this list).
  2. vMotion-FT-Backup-Storage-NAS: L2 traffic that hopefully doesn’t have to be routed. In most cases I see this and management traffic being shared, especially with 10GB; vMotion + FT + backup + NAS if you don’t have a ton of connections. On this particular setup it would be good to set up jumbo frames. You wouldn’t want this traffic running over production if possible, so a dedicated switch would be really good; VMware recommends using a dedicated storage switch anyway.
  3. VM Networks: I usually dedicate two NICs for VM production traffic and usually create separate port groups for each type of VM-related traffic. In some cases you may have a customer who requires separating this out over different NICs. Again, this is just one of those things you have to look at based on the requirements at the time. Normally the latter is good enough.
  4. Storage/NAS and Backup: In most cases businesses may have their own backup network. You could run storage and backup traffic over those switches if you choose. In that case, you might as well also run vMotion and FT over them.
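To make point 1 concrete, the explicit failover override can be set per port group on a standard switch. Below is a rough pyVmomi sketch, assuming you already have a host object in hand; the vSwitch name, port group names, VLAN IDs, and vmnic names are placeholders, so treat it as a starting point rather than a finished script:

```python
from pyVmomi import vim

def set_explicit_failover(host, pg_name, vswitch, vlan_id, active, standby):
    """Override NIC teaming on one port group with an explicit failover order,
    so e.g. vMotion prefers a different uplink than Management."""
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
        policy="failover_explicit",
        notifySwitches=True,
        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
            activeNic=active, standbyNic=standby))
    spec = vim.host.PortGroup.Specification(
        name=pg_name, vlanId=vlan_id, vswitchName=vswitch,
        policy=vim.host.NetworkPolicy(nicTeaming=teaming))
    host.configManager.networkSystem.UpdatePortGroup(pgName=pg_name, portgrp=spec)

# Management active on vmnic0 (vmnic1 standby), vMotion the other way around:
# set_explicit_failover(host, "Management Network", "vSwitch0", 10, ["vmnic0"], ["vmnic1"])
# set_explicit_failover(host, "vMotion", "vSwitch0", 20, ["vmnic1"], ["vmnic0"])
```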

The Switches and considerations:

You usually want two switches if that is all you can do, so you can separate the big traffic from the little traffic (management); if you can go to four, that is even better because then you can look at N+1 for each. If you cannot separate it with dedicated switches, then use QoS or NIOC to control the traffic. Managed switches can be expensive; just remember that vSphere 5 added support for LLDP, so you don’t have to buy Cisco gear just to get that CDP-style discovery information. If you do not plan on using a Converged Network Architecture (FCoE), then be sure to buy enough 1GB NICs. They are cheap, so load up on them even if you may not use them all; things like migrations come up, and if you only buy what you need you’ll end up robbing Peter to pay Paul.

This is really just a quick overview with recommendations. Unfortunately, we only have what we are given in most cases, and we also work off budgets. I am going to cover some lab exercises that break this down even further. General stuff… I hope you enjoy it, and I am sure I will be updating it as well.

Cheers,
Cwjking

VMware vSphere Labs – Foundations – Installing VMware Workstation 8 Custom

This video simply depicts how to install VMware Workstation 8. It’s nothing really advanced; it just covers a more thoughtful way of installing VMware Workstation 8.

The only thing really important to note is that VMware Workstation 8 installs Windows services as part of the installation. What this means is that if you install it on a separate drive, that drive will need to be part of a backup in order to do a full system restore – in other words, you will have to image that other drive as well (when using Windows 7, of course). Remember, this is a simple approach and may not go as deep as you might want it to.

***Disclaimer: The thoughts and views expressed on VirtualNoob.wordpress.com and by Chad King in no way reflect the views or thoughts of his employer or any other company. These are his personal opinions, formed on his own. Also, products improve over time and some things may be out of date. Please feel free to contact us and request an update and we will be happy to assist. Thanks!~

VMware vSphere Labs – Foundations – First Series

Well, I have decided to dub my basic intro into VMware Workstation labs “Foundations”. I, like many others, enjoy discussing and learning about everything: storage, networking, what I want to achieve, what I am designing for – to name a few of the things you will have to consider in your lab. Sure, there is the easy route: stand up a lab, slap some storage on it, run ESXi, build vCenter. But for the few, the proud, and the pros… we like to cover it all. This series is pretty much going to go through every bit of that. Yeah, every bit… even the crumbs from the table. So here is the outline; obviously I will post videos and notes on each. Duly note that at any time I may add a few dozen more posts to Foundations as I embark on this journey. I am looking forward to it and I hope you do as well! (Perhaps when I get to it I will do some CommVault vs. Veeam videos when I get a chance – oh, the drama!)

  1. The different kinds
  2. The Downloads and what you need to know
  3. VMware Workstation Storage Considerations
  4. Networking Considerations and Design
  5. Installing Custom VMware Workstation 8
  6. Creating your Windows 2008 R2 template VM in VMware Workstation 7 and 8

Yeah, I know – who would’ve ever thought a lab took this much thought? It’s just good stuff to think about, and if people are bored, well, you’ve got something to do or watch. By the way, some videos have music and others don’t. Again, feedback is always appreciated!

***Disclaimer: The thoughts and views expressed on VirtualNoob.wordpress.com and by Chad King in no way reflect the views or thoughts of his employer or any other company. These are his personal opinions, formed on his own. Also, products improve over time and some things may be out of date. Please feel free to contact us and request an update and we will be happy to assist. Thanks!~
