VMware vSphere – Networking Best Practices – Networking 101 pt.2

Okay, experts, take it easy on me…

As you know, I have been writing various posts about building your VMware Workstation lab. One of the key points I am trying to drive home during this lab build is how to get your environment matching a production environment as closely as you can. Obviously networking is a very broad subject, especially when it comes to implementing a production vSphere environment. I am going to attempt to do the topic justice by sharing some key points that should help you understand how you should design your network. I am not going to attempt to do what Kendrick Coleman does (because his designs are solid); I am only going to provide talking points and recommendations about which traffic should be separated, why it should be separated, and how. Also keep in mind that one of the most important networking decisions is whether or not to build for N+1 redundancy. VMware highly recommends making your physical networking N+1 so you can get the full benefit of High Availability. So let’s get started with the traffic types.

Traffic Types:

  1. Management (High Availability)
  2. vMotion
  3. Fault Tolerance (Not in all cases)
  4. VM Networks
  5. Backup (Not in all cases)
  6. Storage/NAS (Depends on the type)

Note: Backup and Storage say “depends” because you may or may not have iSCSI/NAS storage, and you may or may not be running network-based backups for your virtual machines (you likely are if you use a product like Veeam or CommVault). Fault Tolerance isn’t really used much, and I believe that even when it gets better it still may not be worth it, considering the size of most workloads and the licensing cost as well. Here are the recommendations and best practices I follow for dedicating traffic:

  1. Management: If possible, put it on its own VLAN and separate the traffic onto a different switch. Use NIC teaming, or a single NIC if you set up a second management VMkernel port on another port group. You can run/share this traffic with vMotion, Fault Tolerance, Backup, and Storage/NAS; if you do share, use some sort of QoS or Network I/O Control. Be mindful that running management alongside all of that traffic isn’t recommended, but it does give you a way to keep it all on a separate switch apart from production VM traffic. If you have plenty of NICs you can run management over the VM production network instead, but you don’t want to expose it to that network, so you must separate it with a different subnet or VLAN. In most cases I see vMotion and management being shared, with Fault Tolerance added on big 10GbE networks. Your NIC teaming should use explicit failover with a port-group override so your vMotion/FT traffic goes over a separate interface than your management traffic (see the first sketch after this list).
  2. vMotion-FT-Backup-Storage/NAS: This is Layer 2 traffic that hopefully doesn’t have to be routed. In most cases I see it shared with management traffic, especially on 10GbE. Combine vMotion, FT, backup, and NAS if you don’t have a ton of connections. On this particular setup it is worth enabling jumbo frames (see the second sketch after this list). You don’t want this traffic running over production if you can avoid it, so a dedicated switch is really good; VMware recommends using a dedicated storage switch anyway.
  3. VM Networks: I usually dedicate two NICs for VM production traffic and create separate port groups for each type of VM-related traffic. In some cases a customer may require separating this out over different NICs; again, that is one of those things you have to judge based on the requirements at the time. Normally the latter is good enough.
  4. Storage/NAS and Backup: In most cases a business may already have its own backup network. You could run storage and backup traffic over those switches if you choose; in that case, you might as well also run vMotion and FT over them.
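
To make the teaming recommendation in item 1 concrete, here is a minimal sketch using pyVmomi (VMware’s Python SDK for the vSphere API). It tags Management and vMotion port groups with VLANs on the same standard vSwitch and gives each an explicit-failover override so they prefer different uplinks. The host name, credentials, vSwitch name, VLAN IDs, and vmnic names are all made-up examples, not something from this post.

```python
# Minimal pyVmomi sketch: VLAN-tag Management and vMotion port groups and give
# each an explicit-failover NIC order that is the reverse of the other.
# Assumed names: esxi01.lab.local, vSwitch0, vmnic0/vmnic1, VLANs 10/20.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='esxi01.lab.local', user='root', pwd='password',
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
netsys = host.configManager.networkSystem

def portgroup_spec(name, vlan, active, standby):
    """Build a port group spec with an explicit-failover teaming override."""
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
        policy='failover_explicit',
        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
            activeNic=active, standbyNic=standby))
    return vim.host.PortGroup.Specification(
        name=name, vlanId=vlan, vswitchName='vSwitch0',
        policy=vim.host.NetworkPolicy(nicTeaming=teaming))

# Management active on vmnic0, vMotion active on vmnic1 -- each only falls
# back to the other uplink if its preferred NIC fails.
netsys.AddPortGroup(portgrp=portgroup_spec('Management', 10, ['vmnic0'], ['vmnic1']))
netsys.AddPortGroup(portgrp=portgroup_spec('vMotion',    20, ['vmnic1'], ['vmnic0']))
```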
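
For item 2, the jumbo-frame piece looks roughly like this, continuing from the connection above: raise the vSwitch MTU to 9000, create a vMotion VMkernel port with a 9000-byte MTU, and tag it for vMotion. Again, the vSwitch name, port group, and IP addressing are assumptions, and jumbo frames only help if every physical switch port in the path carries the same MTU.

```python
# Sketch continuing from the session above (netsys / host already looked up).
# Assumed names: vSwitch1 dedicated to vMotion/NAS, port group 'vMotion',
# IP 192.168.20.11/24. The physical switches must also allow 9000-byte frames.
vswitch = next(v for v in netsys.networkInfo.vswitch if v.name == 'vSwitch1')
vss_spec = vswitch.spec
vss_spec.mtu = 9000                       # jumbo frames on the vSwitch itself
netsys.UpdateVirtualSwitch(vswitchName='vSwitch1', spec=vss_spec)

vmk_spec = vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False,
                         ipAddress='192.168.20.11',
                         subnetMask='255.255.255.0'),
    mtu=9000)                             # jumbo frames on the VMkernel port too
vmk = netsys.AddVirtualNic(portgroup='vMotion', nic=vmk_spec)

# Tag the new VMkernel port for vMotion traffic.
host.configManager.virtualNicManager.SelectVnicForNicType('vmotion', vmk)

Disconnect(si)
```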

The switches and considerations:

You usually want at least two physical switches if that is all you can do; four would be even better, because then you can build true N+1 and try to separate the big traffic from the little traffic (management). If you cannot separate it by using dedicated switches, then use QoS or Network I/O Control (NIOC) to control the traffic (see the sketch below). Managed switches can be expensive; just remember that vSphere 5 added support for LLDP (on the distributed switch), so you don’t have to buy Cisco gear just to get CDP-style discovery information. If you do not plan on using a converged network architecture (FCoE), be sure to buy enough 1GbE NICs. They are cheap, so load up on them even if you may not use them all right away. Things like migrations come up, and if you only buy exactly what you need you’ll end up robbing Peter to pay Paul.
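
If you do end up sharing switches and leaning on NIOC, those knobs live on the distributed switch. Below is a hedged sketch that assumes you already hold a vim.dvs.VmwareDistributedVirtualSwitch object named dvs (and the Enterprise Plus licensing that goes with it); it enables NIOC, raises the shares on the built-in vMotion resource pool, and turns on LLDP.

```python
# Sketch assuming 'dvs' is a vim.dvs.VmwareDistributedVirtualSwitch already
# retrieved (e.g. via a container view, as in the earlier sketch). The pool
# key 'vmotion' is one of the built-in system network resource pools.
dvs.EnableNetworkResourceManagement(enable=True)    # turn on Network I/O Control

vmotion_pool = vim.DVSNetworkResourcePoolConfigSpec(
    key='vmotion',
    allocationInfo=vim.DVSNetworkResourcePoolAllocationInfo(
        limit=-1,                                    # no hard bandwidth limit
        shares=vim.SharesInfo(level='high',
                              shares=100)))          # shares value ignored unless level='custom'
dvs.UpdateNetworkResourcePool(configSpec=[vmotion_pool])

# vSphere 5 added LLDP on the distributed switch, so non-Cisco physical
# switches can still exchange discovery information (listen/advertise/both).
dvs_spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    configVersion=dvs.config.configVersion,
    linkDiscoveryProtocolConfig=vim.host.LinkDiscoveryProtocolConfig(
        protocol='lldp', operation='both'))
dvs.ReconfigureDvs_Task(spec=dvs_spec)
```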

This is really just a quick overview with recommendations. Unfortunately, in most cases we only have what we are given, and we work off budgets. I am going to cover some lab exercises that break this down even further. General stuff… I hope you enjoy it, and I am sure I will be updating it as well.

Cheers,
Cwjking
