
Blog Archives

VMware vSphere – Networking Best Practices – Networking 101 pt.2

Okay, experts, take it easy on me…

As you know, I have been writing various posts around building your VMware Workstation lab. One of the key points I am trying to drive home during this lab build is how to make your environment match a production environment as closely as you can. Obviously networking is a very broad subject, especially when it comes to implementing a production vSphere environment. I am going to attempt to do the topic justice by sharing some key points that should help you understand how to design your network. I am not going to attempt to do what Kendrick Coleman does (because his designs are solid); I am only going to provide talking points and recommendations about traffic separation: which traffic types to separate, why, and how. Also keep in mind that one of the most important networking decisions is whether or not to build N+1. VMware highly recommends making your physical networking N+1 so you get the full benefit of High Availability. So let's get started with the traffic types.

Traffic Types:

  1. Management (High Availability)
  2. vMotion
  3. Fault Tolerance (Not in all cases)
  4. VM Networks
  5. Backup (Not in all cases)
  6. Storage/NAS (Depends on the type)

Note: Backup and Storage say "not in all cases" and "depends" because you may or may not have iSCSI/NAS storage or run network-based backups for your virtual machines, especially if you use a product like Veeam or CommVault. Fault Tolerance isn't widely used, and I believe that even when it gets better it still may not be worth it, considering the size of today's workloads and the licensing cost. Here are the recommendations and best practices I follow for dedicating traffic:

  1. Management: If possible, put it on its own VLAN and separate the traffic onto a different switch. Use NIC teaming, or a single NIC if you set up a second management VMkernel port on another port group. You can share these uplinks with vMotion, Fault Tolerance, backup, and storage/NAS traffic, but if you do, use some form of QoS or Network I/O Control. Be mindful that running management alongside all of that traffic isn't recommended, but it does give you a way to keep it all on a separate switch, apart from production VM traffic. If you have plenty of NICs you can run management over the VM production uplinks, but since you don't want to expose it to that network, you must separate it with a different subnet or VLAN. In most cases I see vMotion and management being shared, with Fault Tolerance added on big 10GbE networks. Your NIC teaming should use an explicit failover order with port-group override so your vMotion/FT traffic goes over a separate interface than your management traffic (the audit sketch after this list is a quick way to verify the separation).
  2. vMotion/FT/Backup/Storage-NAS: This is Layer 2 traffic that ideally doesn't have to be routed; in most cases I see it shared with management traffic, especially on 10GbE. Combine vMotion, FT, backup, and NAS if you don't have a ton of connections, and on this kind of setup it is also worth enabling jumbo frames end to end. You don't want this traffic running over production uplinks if you can avoid it, so a dedicated switch is ideal; VMware recommends a dedicated storage switch anyway.
  3. VM Networks: I usually dedicate two NICs to VM production traffic and create separate port groups for each type of VM-related traffic. In some cases a customer may require separating this across different NICs; again, that is one of those things you have to judge based on the requirements at the time. Normally the two-NIC, multiple-port-group approach is good enough.
  4. Storage/NAS and Backup: In most cases a business may already have its own backup network. You could run storage and backup traffic over those switches if you choose; in that case you might as well run vMotion and FT over them too.
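To tie those recommendations together, here is a minimal audit sketch using pyVmomi (the open-source vSphere Python SDK). It walks each host's VMkernel adapters and prints the port group, VLAN, and MTU so you can verify that management, vMotion, FT, and storage traffic really sit on separate port groups/VLANs and that jumbo frames are set where you intended. The hostname and credentials are placeholders of my own, and the VLAN lookup assumes standard vSwitches; treat it as a starting point, not a finished tool.

```python
# Hedged audit sketch: list every VMkernel adapter per host with its port
# group, VLAN (standard vSwitch only), and MTU. The connection details below
# are placeholder assumptions for illustration.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER_HOST = "vcenter.lab.local"    # placeholder
USERNAME = "readonly@vsphere.local"   # placeholder
PASSWORD = "changeme"                 # placeholder

ctx = ssl._create_unverified_context()  # fine for a lab; use real certs in prod
si = SmartConnect(host=VCENTER_HOST, user=USERNAME, pwd=PASSWORD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        net = host.config.network
        # Map standard-vSwitch port group name -> VLAN ID
        vlan_by_pg = {pg.spec.name: pg.spec.vlanId for pg in net.portgroup}
        print(host.name)
        for vnic in net.vnic:  # VMkernel adapters (vmk0, vmk1, ...)
            print("  %s  portgroup=%s  vlan=%s  mtu=%s" % (
                vnic.device,
                vnic.portgroup or "(distributed switch)",
                vlan_by_pg.get(vnic.portgroup, "n/a"),
                vnic.spec.mtu))
finally:
    Disconnect(si)
```

If the output shows vMotion or FT VMkernel ports sharing a VLAN with management and no QoS/NIOC in play, that is the first thing to revisit.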

The Switches and considerations:

You usually want at least two switches if that is all you can do; four is even better, because then you can look at N+1 and separate the heavy traffic from the light traffic (like management). If you cannot separate it with dedicated switches, then use QoS or NIOC to control the traffic. Managed switches can be expensive; just remember that vSphere 5 added LLDP support (on the distributed switch), so you don't have to buy Cisco gear just to get CDP-style discovery information. If you do not plan on using a converged network architecture (FCoE), be sure to buy enough 1GbE NICs. They are cheap, so load up even if you may not use them all right away; things like migrations come up, and if you only buy exactly what you need you'll end up robbing Peter to pay Paul.
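Since the paragraph above comes down to "buy enough NICs to stay N+1", here is a small helper in the same vein, assuming the same pyVmomi connection as the earlier sketch (you could call it for each host inside that loop). It just counts physical uplinks and their link speeds so you can see at a glance whether every traffic type can keep a redundant uplink.

```python
# Hedged helper, reusing the pyVmomi session from the previous sketch.
# Pass it a vim.HostSystem; it reports physical uplinks and link speeds so you
# can check there are enough 1 GbE (or 10 GbE) ports to keep everything N+1.
def uplink_report(host):
    pnics = host.config.network.pnic
    up = [p for p in pnics if p.linkSpeed is not None]
    print("%s: %d physical NICs, %d with link" % (host.name, len(pnics), len(up)))
    for p in pnics:
        speed = ("%d Mb" % p.linkSpeed.speedMb) if p.linkSpeed else "link down"
        print("  %s  %s" % (p.device, speed))
```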

This is really just a quick overview with recommendations. Unfortunately, in most cases we only have what we are given, and we work off budgets. I am going to cover some lab exercises that break this down even further. General stuff… I hope you enjoy it, and I am sure I will be updating it as well.

Cheers,
Cwjking


VMware vSphere Labs – Foundations – First Series

Well, I have decided to dub my basic intro to VMware Workstation labs "Foundations". I, like many others, enjoy discussing and learning about everything: storage, networking, what you want to achieve, what you are designing for... to name a few of the things you will have to consider in your lab. Sure, there is the easy route: stand up a lab, slap some storage on it, run ESXi, build vCenter. But for the few, the proud, and the pros… we like to cover it all. This series is pretty much going to go through every bit of that. Yeah, every bit… even the crumbs from the table. So here is the outline; I will obviously post videos and notes on each item. Duly note that at any time I may add a few dozen more posts to Foundations as I embark on this journey. I am looking forward to it and I hope you are as well! (Perhaps when I get to it I will do some CommVault vs. Veeam videos – oh, the drama!)

  1. The different kinds
  2. The Downloads and what you need to know
  3. VMware Workstation Storage Considerations
  4. Networking Considerations and Design
  5. Installing Custom VMware Workstation 8
  6. Creating your Windows 2008 R2 template VM in VMware Workstation 7 and 8

Yeah, I know: who would've ever thought a lab took this much thought? It's just good stuff to think about, and if people are bored, well, you've got something to do or watch. By the way, some videos have music and others don't. Again, feedback is always appreciated!

***Disclaimer: The thoughts and views expressed on VirtualNoob.wordpress.com and by Chad King in no way reflect the views or thoughts of his employer or any other company. These are his personal opinions, formed on his own. Also, products improve over time and some things may be out of date. Please feel free to contact us and request an update and we will be happy to assist. Thanks!~

vSphere Experience 101 – Beginnings and VDI

Welcome to my world of vSphere. Here I will be sharing my experiences with vSphere and all the challenges we have faced with its various implementations. Being a part of one of the largest infrastructures in the world, we have learned from our mistakes and learned the importance of "best practices" when working within a VI (Virtual Infrastructure) environment. As with most large enterprises, we started out using MS Virtual Server as our first implementation and used VMware very little at first. Being an MS shop proved beneficial when it came to licensing and such, so that is probably why we began with it. Oh, those were the days…

After much testing in the lab, I noticed that the virtual environment began to grow dramatically. We soon had clusters in our lab, testing and working with various implementations. The MS Virtual Server boxes sat in the racks still running the 3 or 4 VMs they had always run, and that solution never really grew much. Hyper-V soon came out, but after seeing the maturity of VMware over Hyper-V, the choice still seemed pretty obvious. Testing went on and slowly bled out into our production environment. We had some clusters of Dell PE2900s and R900s that were being used for some of our home office data center servers. Finally, after a slow rollout, in the third quarter of 2008 I got to experience the first large implementation of VI3 in our environment.

The first large-scale implementation used Dell PowerEdge M1000e blade enclosures fully populated with M600s, attached to Dell EqualLogic iSCSI arrays for storage. Each host had 4 NICs and 32 GB of RAM (at the time 4 NICs seemed plenty, but it ended up being too little). This was the beginning of our first desktop virtualization implementation, VDI (Virtual Desktop Infrastructure), on ESX 3.5 Update 4.

This was the first initiative to help cut down the large costs associated with vendor desktops sprawled throughout our technical center. You could walk the aisles of our tech center and see anywhere from 1 to 12 desktops stacked up that various technical teams used for vendors, or even for themselves, for remote access, support, and development services. When it was all said and done, I am sure the number of systems was well over a few thousand; that is a huge amount of overhead when you look at the power cost alone. This VDI solution was the remedy to that problem, but being the first implementation didn't make it our best. Little did we know about the risk of "disk alignment" and its effects when using the iSCSI arrays with our VDI environment. For the brokers, aka gateway servers, we used Citrix NetScalers with SSL, and from there we managed various applications that were streamed to the VDI environment and other TCs (thin clients).
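To make the alignment issue concrete, here is a small, self-contained sketch. The 64 KiB backend block size is a hypothetical value chosen purely for illustration; the sector-63 start is the classic pre-Vista Windows partition layout that typically causes misalignment, while newer guests start at sector 2048 (1 MiB) and line up cleanly.

```python
# Illustration of guest partition (mis)alignment against an array's block size.
# ARRAY_BLOCK is a hypothetical 64 KiB chunk size used purely for illustration.
SECTOR = 512                 # bytes per disk sector
ARRAY_BLOCK = 64 * 1024      # hypothetical backend block/chunk size in bytes

def is_aligned(start_sector, block=ARRAY_BLOCK):
    """True if the partition's byte offset falls on a backend block boundary."""
    return (start_sector * SECTOR) % block == 0

for start in (63, 2048):     # classic XP/2003 layout vs. modern 1 MiB layout
    offset = start * SECTOR
    print("partition at sector %4d -> offset %7d bytes, aligned=%s"
          % (start, offset, is_aligned(start)))
# sector   63 -> offset   32256 bytes, aligned=False (guest I/Os straddle blocks)
# sector 2048 -> offset 1048576 bytes, aligned=True
```

When the guest partition is misaligned, many guest I/Os span two backend blocks and the array does extra work for every read and write, which is exactly the kind of penalty that hurts a dense VDI datastore.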

All users were set up and managed from the Citrix broker servers. We already had Citrix licensing purchased, so it made sense to simply utilize what we had for this portion of the VDI environment. One other problem we saw with this implementation was that the engineers didn't plan HA very well in the layout of hosts versus blade chassis. Instead of spreading the hosts between blade chassis, they put all 16 hosts in a single chassis, which could have a huge impact if an entire chassis went down: it would take out an entire DRS/HA cluster. Also, due to the "disk alignment" issue, we had to perform a storage migration to remedy the problem, an extensive, large-scale project involving many hours of labor, all of which could have been avoided if proper thought and research had been done up front. The real downer was that this ruined iSCSI's chance to prove itself as a POC (proof of concept).

Lesson learned, and learned well for that matter. Now we are beginning the second phase of our VDI environment, using high-performance NetApp arrays with NFS-based storage. We still use the old Dells, but the current hosts are HP BL-class blade systems. The new hosts have 72 GB of RAM, two quad-core processors, and 6 NICs, though I still believe it would've been worth just doing 2x 10GbE instead. The old EqualLogic iSCSI storage is currently unused, and the Dell blades are utilizing the NetApp NFS as well. The second phase is on vSphere 4, and we are also using the NetApp PSA plugin, which is quite nice. Though we are not using all the features of the NetApp storage solution, the performance has been dramatically better than the iSCSI setup of the past. Of course, good planning and engineering is always a good thing: this time we spread the hosts out between blade chassis. There are approximately 16 hosts per DRS/HA cluster, and we placed no more than 4 of those 16 hosts in any one chassis, so in the event of a chassis failure or network issue we would lose at most 4 hosts. Currently we manage failover capacity by percentage.
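As a rough illustration of how that percentage falls out of the chassis layout, here is a minimal sketch. The host counts come straight from the layout above; the exact admission-control value you pick is of course still a design decision.

```python
# Back-of-the-envelope HA admission control for the layout described above:
# 16 hosts per DRS/HA cluster, at most 4 of them in any one blade chassis.
# To survive the loss of a whole chassis, reserve at least that fraction of
# cluster CPU and memory as failover capacity.
hosts_per_cluster = 16
hosts_per_chassis = 4

reserve_pct = 100.0 * hosts_per_chassis / hosts_per_cluster
print("Reserve at least %.0f%% CPU/memory failover capacity" % reserve_pct)
# -> Reserve at least 25% CPU/memory failover capacity
```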

VDI is still relatively new to us, and we are learning the do's and don'ts as we go along. One thing is certain: we will have to get all our 3.5 hosts to 4.1. Our new environment is being used for dynamic desktop virtualization. It's getting closer each day, and we finally got to give it a run when VMware View 4.5 was released. So far we like what we have seen, though I think we will still use Citrix for profile management/apps, and MS App-V, in the future.

***Disclaimer: The thoughts and views expressed on VirtualNoob.wordpress.org and by Chad King in no way reflect the views or thoughts of his employer or any other company. These are his personal opinions, formed on his own. Also, products improve over time and some things may be out of date. Please feel free to contact us and request an update and we will be happy to assist. Thanks!~

HP Server Systems and Considerations Part 1

The other day I had the liberty of spending some time researching hardware platforms to run a vSphere 4.0/4.1 environment. Seeing how licensing is changing to more of a per-socket (and per-VM) basis these days, I have been thinking about platforms.

I have worked with various hardware platforms over the years, but I have to say I have always liked HP hardware. Currently we are working with their current line of blade systems and the DL-280s out in remote sites. After noting how licensing has moved to a per-socket approach, this week I was considering the larger systems like the DL580s and DL980s to save on licensing and build a high-compute environment into which to P2V anything and everything in your environment.

My obsession was first the DL980, but then I turned to the DL580 because of its somewhat lower cost and good performance. Ultimately I would probably go with the DL980, but the DL580 still rocks in my opinion. With the storage bandwidth and multiple dual-port 10GbE cards, this server can really hit the storage hard and fast… and so far in the virtual world, storage is where we are seeing the pain.

More to come…

***Disclaimer: The thoughts and views expressed on VirtualNoob.wordpress.org and by Chad King in no way reflect the views or thoughts of his employer or any other company. These are his personal opinions, formed on his own. Also, products improve over time and some things may be out of date. Please feel free to contact us and request an update and we will be happy to assist. Thanks!~

Is #VCDX Cert worth it?? Legitimate Question?

So, thoughts on the #VCDX?

I know there are a lot of buzzwords out there about certifications, especially in the virtualization areas of information technology. Virtualization is a "hot trend" when it comes to Total Cost of Ownership and Return on Investment numbers, which often has enterprises throwing money at it, though some implementations are becoming more affordable when you look at cutting-edge technology at low cost. Take, for example, being able to virtualize local storage into shared storage (like HP LeftHand or FalconStor). The lab is getting expensive simply because there are so many new money-saving technologies out there to stand up and learn. For that reason, big enterprises are offering big salaries to people who can really show them how to save money.

I am sure there are many individuals who have been looking at the VCDX for a while. The fact of the matter is that with all the new enhancements to virtualization technologies, the engineering and architecture have reached a whole new level. I think this is where the VCAP exams come in to preserve and protect the credibility of the VCDX. I agree a VCDX should not be easily attained, and adding the VCAPs to the path makes it a little more challenging, considering you now have to pay for two additional exams on top of the VCDX certification. Lately, when looking at job postings, I saw one that even said "VCP – VCAP preferred or highly desired". Sometimes it makes me wonder if just getting a VCAP-DCA or DCD is an overall better goal. Don't get me wrong, I am not knocking the VCDX; it is a great certification, but I would just like to hear some feedback on the subject.

I have a friend on Twitter who stated, "I would rather have it as more of a personal achievement." Looking at it now, I am beginning to feel the same. Feel free to comment away!

***Disclaimer: The thoughts and views expressed on VirtualNoob.wordpress.org and by Chad King in no way reflect the views or thoughts of his employer or any other company. These are his personal opinions, formed on his own. Also, products improve over time and some things may be out of date. Please feel free to contact us and request an update and we will be happy to assist. Thanks!~
