vSphere – vCloud – Fast Provisioning – My Thoughts…

Yeah, some would say this post is probably overdue, but lately I have sincerely been thinking: have we been drinking some Kool-Aid around this feature? I can't help having some concerns about how this feature gets implemented in VCD installations. I, in particular, am not completely sold on it. Here are just a few quick reasons that didn't exactly sell me:

  1. It’s a very “new” feature with regard to VCD, which is itself still early in its life as a cloud platform.
  2. There is currently no way of updating those linked clones, unlike VMware View (some admin overhead, plus juggling local and shared catalogs).
  3. Added complexity (linked images, snapshot chains, and how you have to handle storage motion).
  4. By default, ALL linked-clone images are misaligned (VMware has yet to address this problem). In some cases this can compound into additional I/O overhead.
  5. Design has to be carefully considered and evaluated given the maximum of 8-node clusters (this affects existing installations as well).

So yeah, I know I look like the bad guy, but I seriously think this release was targeted more at SMB than anything. IMO, this is more of a feature for smaller businesses, because now they don’t have to go out and spend all that crazy dough on a VAAI-capable array (hooray for them :)), which raises the question…

Why do you need to enable this feature if you already leverage VAAI capable arrays?

It just seems to me that Fast Provisioning was released a little prematurely. Although VCD continues to improve, I think this feature needs some serious work before bigger shops decide to utilize it. The other downside is that we have yet to see any real progress on the UNMAP problem, and it’s now treated as a manual task we should run during certain windows… or outages, I should say. That really blows, because we all know what kinds of benefits and problems thin provisioning on an array can cause. For the most part, it’s just really bad reporting… lol.

Here are some other sources I would recommend reading; I seriously think you should read them and decide for yourself whether it’s really worth it. Also, be careful not to put the cart before the ox, and do your homework. Some people drink the Kool-Aid and don’t think to question or ask, “What’s really under the hood?” Fast Provisioning should never be equated with VMware View… it’s similar but not identical. I would definitely recommend reading Nick’s blog, which opened my eyes to what he calls the “fallacies,” and of course Chris has a good read as well.

http://datacenterdude.com/vmware/vcd-fast-provisioning-vaai-netapp/
http://www.chriscolotti.us/vmware/info-vcloud-director-fast-provisioned-catalog-virtual-machines/
http://www.kendrickcoleman.com/index.php?/Tech-Blog/vcloud-director-15-features-that-effect-limitation-and-design.html


vSphere – Networking – ESXi Single NIC VDS Management Migration

Well, I wasn’t sure how to name this post, as VMware continues to use all kinds of different lingo for all of their bells and whistles. I had the unique opportunity to begin migrating management interfaces, also known as vmkernel interfaces, from VSS to VDS switching. This presented a lot of struggles in the past, but it seems to me that VMware has really improved this functionality in the later versions of vSphere. I recall running into many kinds of issues when doing this on 4.0. So far, using a vCenter 5 server with a mix of 4.1 and 5.0 hosts, testing has proved to be seamless and non-disruptive. However, I would still highly recommend considering all your options and testing this method THOROUGHLY before ever touching production environments.

I was able to migrate ESXi management running on a single physical NIC from a VSS to a VDS. This video covers how I did it. The reason for the video is that I got all kinds of senseless Google results when searching for something documented, so I did myself a favor and published one.

Remember, this is a test and is only applicable for me in a few environments; in most cases I use redundant NICs. Now, the real kicker is that migrating from a VDS back to a VSS requires a bit more thinking and planning, especially if you only have access to a single pNIC. Maybe I will cover that some other time… for now, try to use two. Also, this may be a solution for environments running a single 10Gb uplink that need to use PVLANs or centralize management.
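
For reference, here is a rough PowerCLI sketch of the same migration. Treat it as a sketch under my assumptions only: the host has already been added to the vDS (without migrating anything yet), a distributed port group for management already exists, and the names vcenter.lab.local, esx01.lab.local, dvSwitch01, dvPG-Mgmt, vmk0 and vmnic0 are placeholders rather than anything from the video. Lab it THOROUGHLY first, just like I said above.

    # PowerCLI sketch: move the management vmkernel port and its single pNIC
    # from the standard switch to the vDS in one operation.
    Connect-VIServer -Server vcenter.lab.local

    $esx  = Get-VMHost -Name "esx01.lab.local"
    $vds  = Get-VirtualSwitch -VMHost $esx -Name "dvSwitch01" -Distributed
    $pg   = Get-VirtualPortGroup -VirtualSwitch $vds -Name "dvPG-Mgmt"

    # The management vmkernel interface and the lone physical uplink on the VSS
    $vmk  = Get-VMHostNetworkAdapter -VMHost $esx -Name "vmk0"
    $pnic = Get-VMHostNetworkAdapter -VMHost $esx -Physical -Name "vmnic0"

    # Migrating the pNIC and the vmkernel interface together is what keeps
    # management connectivity from dropping mid-move.
    Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch $vds `
        -VMHostPhysicalNic $pnic -VMHostVirtualNic $vmk `
        -VirtualNicPortgroup $pg -Confirm:$false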

Up and Coming – CommVault Round 3…?

No way…

Yes, you read it right. Expect more on this over the next couple of weeks as I learn more. I will get to play with it in some capacity, so maybe I can share some good stuff. I am going to be covering some things around considerations and designs, hopefully. We all know this was a hot topic a while back, but now that the dust has settled maybe I can make some progress. CommVault has made some pretty strong improvements over the past couple of months. They have covered some of the pain points I had with it in a pretty short amount of time. I don’t think this would’ve happened had some folks not made a point of showing some of these things openly. My number one gripe back in the day was that NEITHER CommVault NOR Veeam had “TRUE” vCloud Director compatibility, IN MY OPINION (all caps shows my emphasis). I am sure that by now this and maybe a few other things have changed. I am not sure I will be doing any comparisons per se, but it will be good to see how the latest and greatest stacks up. I think by now I can easily say I know what industry standards look like for VMware backups, and what the expectations and performance should be. One thing you can take to the bank is that I plan to do my homework just like last time. If there is something I don’t like or think needs improvement, I will most certainly write about it. All the folks out there who read this blog need to understand there isn’t a lot of information out there on some of these topics I cover. I would encourage anyone reviewing a CommVault solution to do their homework; there are a lot of things to consider when going with a backup product. Let’s hope this time around I don’t have to pull any punches…

Update vSphere 5 – My two cents err problems

What’s the deal man?

Well, to be honest, I have run into two very specific issues, and what I want to reiterate is how crucial it is to review updates before just deploying a normal vSphere 5 implementation. First off, I want to say that in the middle of my vSphere 5 upgrade, Update 1 was released. So with that being said comes the dilemma. Coordinating the update process and procedure is always critical. You should also do your due diligence and review the updates along with known bugs. I honestly have to give credit to the VMware Community, which has definitely allowed me to identify problems beforehand and figure out how to avoid or work around them. Now on to my issues.

Issue Number 1: Broken sVmotion (Storage Migration)

Well, this one was obvious, but being the optimist I am, I didn’t think I would run into this little issue. However, it appears to be an ESXi “special feature” for vSphere 5! I would highly recommend reviewing the following if you are having problems performing storage vMotions on vSphere 5/vCloud Director 1.5. I believe it is actually an issue with the ESXi hypervisor, because prior to Update 1 there was a patch you could install on your ESXi box. Please see the following reference for the resolution:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2012122

FIX? > Install UPDATE 1 or ESXi Patch 2
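
If you are not sure whether your hosts already carry the fix, a quick PowerCLI one-liner (nothing fancy, just a sketch) will show the version and build number of every host so you can compare it against the KB:

    # List ESXi version and build for every host managed by this vCenter
    Connect-VIServer -Server vcenter.lab.local
    Get-VMHost | Select-Object Name, Version, Build | Sort-Object Name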

Issue Number 2: vCenter Network Alarm Feature!

So, the key phrase to stress in this issue is probably one that makes many CRINGE: test and prod should always be the SAME. We all know how important that is, but SERIOUSLY, how many of us actually MIRROR everything, even the alarms? This is more of an issue with standards and procedures than anything… again I am reminded of the 9 parts planning and 1 part implementing, or the old “your poor planning doesn’t constitute an emergency on my part.”

If the statement above doesn’t tell you what happened, then this KB most certainly can… 🙂
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007231

FIX? > Yeah, just read the KB, it’s quite ridiculous… oh wait, just install Update 1?

So, I am writing this to tell you that I would recommend applying or using the in-place upgrade to vSphere 5 Update 1. Oh, and just so you know, I warned you: it still doesn’t support the following build number:

NOTE: I would highly recommend updating from vanilla 4.1 to avoid the special VMware “feature” of a PSOD.

Last but not least, I thought it would be equally important to highlight a video that we can all share and relate to when facing unexpected results… It’s not exactly the same situation, but I can definitely relate to the frustration.

Enjoy!

vSphere 5 – Storage pt.2 – vCloud and vSphere Migrations

The point..

In my last post I covered some things to think about when looking at the new VMFS-5 partitions. Obviously, the point of moving to the new VMFS is to gain all the benefits explained in that previous post. What you will see in this post are the types of migrations. I also want to highlight that I shared some resources at the bottom for those of you who want to review some deeper material. Obviously there isn’t a ton of documentation out there highlighting this, nor the special *features* of vSphere 5 (sVmotion issues??) that you may run into. So let’s hope I do this further justice. On to the blog!

Adding VMFS-5 to the vCloud

  1. Log in to vSphere and ensure you have a new LUN provisioned (covered in the provisioning how-to).
  2. Log in to the vCloud Director web interface as an administrator.
  3. Click the “System” tab and click on Provider VDC. Right-click a PVDC and select “Open”.
  4. After opening the PVDC, select the Datastores tab and then click the +/- button to add/remove datastores.

  5. Browse through the datastores by clicking the > button or by searching in the top right. When you have located your datastore, highlight it, then click the add (+) button and click “OK”. Disregard the warning.


(Note: the yellow highlights show the ways you can search and browse through datastores. This is very handy when there are many to look through.)


(Note: the yellow highlight shows the datastore added successfully. This is a 20TB datastore.)

You will now see the datastore in the datastore summary tab for that PVDC

Migrating vCloud Director Virtual Machines to the “new” VMFS-5 LUN

  1. Make sure the vApp is NOT a linked clone. If it is a linked clone, refer to the references below.
  2. Ensure the datastore you want to Storage vMotion the virtual machine to is also provisioned to the Org VDC. Do this by opening the Org VDC and selecting the “Datastores” tab.

    Note: you can see both datastores are attached to this VDC with the organization known as App1

  3. You can then log in to the vSphere Client against the vCenter noted for that VDC and perform a Storage vMotion. Another way of doing a Storage vMotion is to use the script William Lam wrote (see references below).
  4. If you need to perform the sVmotion, follow the method below.

NOTE: I would highly recommend that you roll out Update 1 to all vCloud components. It addresses a few major issues that will allow operations to run more smoothly. More importantly, without it the only way to sVmotion vCloud VMs is to power them off. This is a pretty common issue with vanilla vSphere 5/vCloud 1.5 rollouts, and I experienced this problem as well. For more information, please see the references at the bottom.
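
Before you start moving anything, it helps to see exactly what lives on the old datastore and whether it is powered on (remember the power-off caveat above). A small PowerCLI sketch; the datastore name is a placeholder:

    # List the VMs on the old datastore along with power state and used space,
    # so you can plan which ones can be Storage vMotioned right away.
    Connect-VIServer -Server vcenter.lab.local
    Get-VM -Datastore (Get-Datastore -Name "old-vmfs3-datastore") |
        Select-Object Name, PowerState, UsedSpaceGB |
        Sort-Object Name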

Migrate a Virtual Machine with Storage VMotion in vSphere

Use migration with Storage vMotion to relocate a virtual machine’s configuration file and virtual disks while the virtual machine is powered on. You cannot change the virtual machine’s execution host during a migration with Storage vMotion. (Note: if the VM is managed by vCloud Director and you are not on 1.5 Update 1, you may need to power off the virtual machine to perform the sVmotion. If the virtual machine is a fast provisioned VM (linked clone), you will need to perform the sVmotion through the API.)

Procedure

  • Ensure you are not moving a vCloud vApp; if you are, please follow the process above first.
  • Display the virtual machine you want to migrate in the inventory.
  • Right-click on the virtual machine, and select Migrate from the pop-up menu.
  • Select Change datastore and click Next.
  • Select a resource pool (the same) and click Next.
  • Select the destination datastore:
    To move the virtual machine configuration files and virtual disks to a single destination, select the datastore and click Next.
    To select individual destinations for the configuration file and each virtual disk, click Advanced. In the Datastore column, select a destination for the configuration file and each virtual disk, and click Next.
  • Select a disk format and click Next:
  • The disk format options are:
    Same as Source – Use the format of the original virtual disk. If you select this option for an RDM disk in either physical or virtual compatibility mode, only the mapping file is migrated.
    Thin provisioned – Use the thin format to save storage space. The thin virtual disk uses only as much storage space as it needs for its initial operations. When the virtual disk requires more space, it can grow in size up to its maximum allocated capacity. This option is not available for RDMs in physical compatibility mode. If you select this option for a virtual compatibility mode RDM, the RDM is converted to a virtual disk. RDMs converted to virtual disks cannot be converted back to RDMs.
    Thick – Allocate a fixed amount of hard disk space to the virtual disk. The virtual disk in the thick format does not change its size and from the beginning occupies the entire datastore space provisioned to it. This option is not available for RDMs in physical compatibility mode. If you select this option for a virtual compatibility mode RDM, the RDM is converted to a virtual disk. RDMs converted to virtual disks cannot be converted back to RDMs.

    NOTE: Disks are converted from thin to thick format or thick to thin format only when they are copied from one datastore to another. If you choose to leave a disk in its original location, the disk format is not converted, regardless of the selection made here.

  • Review the page and click Finish.
  • A task is created that begins the virtual machine migration process.
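
The same procedure can also be driven from PowerCLI instead of the client. This is only a sketch under a couple of assumptions: the VM is not under vCloud control (or your cell is already on 1.5 Update 1), the VM and datastore names are placeholders, and your PowerCLI build exposes the -DiskStorageFormat parameter on Move-VM:

    # Storage vMotion one VM to the new VMFS-5 datastore.
    # Omitting -DiskStorageFormat keeps the "Same as Source" behavior.
    Connect-VIServer -Server vcenter.lab.local

    $vm = Get-VM -Name "app-server-01"
    $ds = Get-Datastore -Name "new-vmfs5-datastore"

    Move-VM -VM $vm -Datastore $ds -DiskStorageFormat Thin -Confirm:$false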

References:

Linked Clones:
http://www.virtuallyghetto.com/2012/04/scripts-to-extract-vcloud-director.html
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1014249

Storage Motion Issue:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2012122

How To’s sVmotion CLI/VCO style:
http://www.virtuallyghetto.com/2012/02/performing-storage-vmotion-in-vcloud.html
http://www.virtuallyghetto.com/2012/02/performing-storage-vmotion-in-vcloud_19.html
http://geekafterfive.com/2012/03/06/vcloud-powercli-svmotion/
http://geekafterfive.com/tag/vcloud/
http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-501-virtual-machine-admin-guide.pdf

Storage Considerations for vCloud:
http://www.vmware.com/files/pdf/techpaper/VMW_10Q3_WP_vCloud_Director_Storage.pdf

vSphere 5 – Storage pt.1 VMFS and Provisioning

VMFS:

VMware® vStorage Virtual Machine File System (VMFS) is a high-performance cluster file system that provides storage virtualization optimized for virtual machines. Each virtual machine is encapsulated in a small set of files, and VMFS is the default storage system for these files on physical SCSI disks and partitions. This file system enables the use of VMware cluster features such as DRS and High Availability, as well as other storage enhancements.
For more information please see the following document here and the following KB here.

Upgrading VMFS:

There are two ways to upgrade VMFS to version 5 from the previous 3.x. An important note when upgrading to VMFS-5 or provisioning new VMFS-5 is that legacy ESX/ESXi hosts will not be able to see the new VMFS partitions. This is because of the enhancements made to ESXi and its partitioning. Upgrading to VMFS-5 is irreversible, so always consider what you are doing. Lastly, there are many ways to provision VMFS-5; these are just two of the more common ways of doing it.

Method 1: Online Upgrade

Although an online upgrade does give you some of the new features in VMFS-5, it does not give you all of them. However, it is the least impacting and can be performed at any time without an outage. Below are the features you will not gain by doing an in-place upgrade:

  • VMFS-5 upgraded from VMFS-3 continues to use the previous file block size which may be larger than the unified 1MB file block size.
  • VMFS-5 upgraded from VMFS-3 continues to use 64KB sub-blocks and not new 8K sub-blocks.
  • VMFS-5 upgraded from VMFS-3 continues to have a file limit of 30720 rather than new file limit of > 100000 for newly created VMFS-5.
  • VMFS-5 upgraded from VMFS-3 continues to use MBR (Master Boot Record) partition type; when the VMFS-5 volume is grown above 2TB, it automatically & seamlessly switches from MBR to GPT (GUID Partition Table) with no impact to the running VMs.
  • VMFS-5 upgraded from VMFS-3 continues to have its partition starting on sector 128; newly created VMFS-5 partitions will have their partition starting at sector 2048.

RDM – Raw Device Mappings

  • There is now support for passthru RDMs to be ~ 60TB in size.
  • Non-passthru RDMs are still limited to 2TB – 512 bytes.
  • Both upgraded VMFS-5 & newly created VMFS-5 support the larger passthru RDM.

The end result of using the in-place upgrade can be the following:

  • Performance is not optimal
  • Non-standard configurations can remain in place
  • Disk alignment will remain a consistent issue with older environments
  • The lower file limit can be impactful in some cases

Method 1: How to perform an “Online” upgrade for VMFS-5

Upgrading a VMFS-3 to a VMFS-5 file system is a single-click operation. Once you have upgraded the host to VMware ESXi™ 5.0, go to the Configuration tab > Storage view. Select the VMFS-3 datastore, and above the Datastore Details window, an option Upgrade to VMFS-5 will be displayed:


Figure 3. Upgrade to VMFS-5

The upgrade process is online and non-disruptive. Virtual machines can continue to run on the VMFS-3 datastore while it is being upgraded. Upgrading the VMFS file system version is a one-way operation. There is no option to reverse the upgrade once it is executed. Additionally, once a file system has been upgraded, it will no longer be accessible by older ESX/ESXi 4.x hosts, so you need to ensure that all hosts accessing the datastore are running ESXi 5.0. In fact, there are checks built into vSphere which will prevent you from upgrading to VMFS-5 if any of the hosts accessing the datastore are running a version of ESX/ESXi that is older than 5.0.

As with any upgrade, VMware recommends that a backup of your file system is made prior to upgrading your VMFS-3 file system to VMFS-5.

Once the VMFS-5 volume is in place, the size can be extended to 64TB, even if it is a single extent, and ~2TB Virtual Machine Disks (VMDKs) can be created, no matter what the underlying file-block size is. These features are available ‘out of the box’ without any additional configuration steps.
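
If you want a quick inventory of which datastores are still on VMFS-3 before and after the exercise, a PowerCLI one-liner does the trick. A minimal sketch, assuming the FileSystemVersion property is populated on your datastore objects:

    # Report file system version and capacity for every datastore in vCenter
    Connect-VIServer -Server vcenter.lab.local
    Get-Datastore |
        Select-Object Name, Type, FileSystemVersion, CapacityMB |
        Sort-Object FileSystemVersion, Name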

NOTE: Some of this documentation consists of excerpts provided by and used from VMware documentation and sources.

Method 2: Provisioning New VMFS-5

This method explains how to move to VMFS-5 without performing an “online” upgrade. Essentially, this is the normal process of provisioning a VMFS LUN for ESXi 5, just as you would have done on older versions. Here are the benefits of newly provisioned VMFS-5 compared with an “online” upgrade:

  • VMFS-5 has improved scalability and performance.
  • VMFS-5 does not use SCSI-2 Reservations, but uses the ATS VAAI primitives.
  • VMFS-5 uses GPT (GUID Partition Table) rather than MBR, which allows for pass-through RDM files greater than 2TB.
  • Newly created VMFS-5 datastores use a single block size of 1MB.
  • VMFS-5 has support for very small files (<1KB) by storing them in the metadata rather than in the file blocks.
  • VMFS-5 uses sub-blocks of 8K rather than 64K, which reduces the space used by small files.
  • VMFS-5 uses SCSI_READ16 and SCSI_WRITE16 cmds for I/O (VMFS-3 used SCSI_READ10 and SCSI_WRITE10 cmds for I/O).

Other Enhancements:

  • Disk alignment for guest OSes becomes transparent and has less impact.
  • The I/O performance and scalability gains are greater with a newly created VMFS-5 than with an online upgrade.

As you can see, normal provisioning of VMFS-5 is a lot more robust in features and offers a great deal of improvement over just performing an “online” upgrade. The online upgrade is easy and seamless, but all of the benefits above should be weighed. In my case, the chosen method would be Method 2. The only instance in which an “online” upgrade would be considered under normal circumstances is if you were already at capacity on an existing array; in that type of scenario it could be viewed as the more beneficial route. Also, if you did not have Storage vMotion licensed through VMware, further consideration of how to migrate to the new VMFS would be needed, and migrating workloads to new VMFS-5 would be a bit more of a challenge in that case as well. However, this is not an issue under most circumstances.

Method 2: How To provision new VMFS-5 for ESXi

  1. Connect to vSphere vCenter with vSphere Client
  2. Highlight a host and click the “Configuration” tab in the right pane.
  3. Click on “Storage”
  4. In the right pane click “Add Storage” (See image)
  5. Select the LUN you wish to add
  6. Expand the Name column to record the last four digits (these will be in the naa name); in this case it will be 0039. Click “Next”
  7. Select to use “VMFS-5” option
  8. Current Disk Layout – Click “Next”
  9. Name the datastore using an abbreviation of the customer’s name, the type of storage, and the LUN LDEV (yes, a standard). In this example that would be “Cust-Name” = name, “SAN” = type, “01” = datastore number, “LDEV” = 0038 (cus-nam-san-01-1234).
  10. Select the radio button “Maximum available space” click > Next
  11. Click Finish and watch for the “task” to complete on the bottom of vSphere client
  12. After the task completes go to the Home > Inventory > Datastores
  13. Make sure there is a Fiber Storage folder created. Under that folder, create a tenant folder and relocate the datastores into the new tenant folder.
  14. After moving the folder you may need to provision this datastore for vCloud. Proceed to the optional method for this below.
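
For what it’s worth, the same provisioning can be scripted with PowerCLI. A hedged sketch only: the host name, the trailing LUN digits, and the datastore name are placeholders following the naming standard above, and I am assuming your PowerCLI version supports the -FileSystemVersion parameter on New-Datastore for VMFS-5:

    # Find the new LUN by the recorded digits of its naa name and format it as VMFS-5
    Connect-VIServer -Server vcenter.lab.local

    $esx = Get-VMHost -Name "esx01.lab.local"
    $lun = Get-ScsiLun -VmHost $esx -LunType disk |
        Where-Object { $_.CanonicalName -like "*0039" }

    # Create the datastore as VMFS-5, following the naming standard from step 9
    New-Datastore -VMHost $esx -Name "cus-nam-san-01-0039" -Path $lun.CanonicalName `
        -Vmfs -FileSystemVersion 5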

Note: Some of the information contained in this blog post is provided by VMware Articles and on their website http://www.vmware.com

vSphere Networking 101 – Renaming VMnics – Backup and Restore

This is a super simple tutorial that I wanted to do on how to rename VMnics. This is great for network card replacement and is just a good thing to know in case something does go south on a hardware replacement. Let’s move on.

SSH to your host, and keep in mind to have a DRAC or iLO available just in case. You shouldn’t need it if you are not touching the management or service console uplinks for ESX or ESXi.

  1. Run the following to back up the current configuration (see the KB):
  2. cp -p /etc/vmware/esx.conf /etc/vmware/esx.conf.backup
  3. vi /etc/vmware/esx.conf
  4. Scroll down and locate the device IDs; these will be followed by =vmnic#
  5. Type i to enter insert mode
  6. Move the cursor to the vmnic you wish to modify
  7. Delete or backspace the old name
  8. Type in the right name
  9. Hit “ESC”
  10. Type “:wq” (yes, that is a colon, then wq) to save and quit

You can either reboot or try restarting the management services, but that is pretty much all there really is to it. Enjoy, and thanks for stopping by!

Youtube video:

VMware vSphere Labs – Infrastructure – Setting Up Active Directory on Windows 2008 R2

This tutorial runs through a quick overview of installing Active Directory 2008 R2 on a Windows virtual machine running in VMware Workstation 8. It has a video and general instructions to help you out. Enjoy!

  1. Deploy from the template
  2. Configure NICS Static
  3. Disable Extra NIC
  4. Set the gateway and DNS to the gateway listed in the “Virtual Network Editor”
  5. Keep DNS as the secondary DNS of the Domain Controller
  6. Rename machine to appropriate Computer Name to reflect your Domain Controller (sysprep gives silly names)
  7. Reboot
  8. Add Role from server manager
  9. Select Active Directory Domain Services
  10. Yes, Install the .Net Stuff….
  11. Run dcpromo.exe from PowerShell or from within Server Manager under the AD DS role (a scripted sketch follows this list)
  12. Install DNS (if not you must be doing something a bit more advanced :))
  13. Reboot and validate you can log into AD with a Domain Account.
  14. Join another Virtual Machine to the Domain
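
If you would rather script steps 8 through 11 than click through Server Manager, here is a rough PowerShell sketch for 2008 R2. The domain name, NetBIOS name and DSRM password are placeholders, and you should double-check the dcpromo unattend switches against Microsoft’s documentation before leaning on them:

    # Install the AD DS role binaries (Windows Server 2008 R2)
    Import-Module ServerManager
    Add-WindowsFeature AD-Domain-Services

    # Promote to the first DC of a new forest, installing DNS along the way.
    # Placeholder values; verify the unattend switches for your build.
    dcpromo.exe /unattend `
        /ReplicaOrNewDomain:Domain `
        /NewDomain:Forest `
        /NewDomainDNSName:lab.local `
        /DomainNetbiosName:LAB `
        /InstallDNS:Yes `
        /SafeModeAdminPassword:"P@ssw0rd!" `
        /RebootOnCompletion:Yes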

VMware vSphere – Networking Best Practices – Networking 101 pt.2

Okay Experts take it easy on me…

As you know, I have been writing various posts around building your VMware Workstation lab. One of the key points I am trying to drive home during this lab build is how to get your environment to match a production environment as closely as you can. Obviously networking is a very broad subject, especially when it comes to implementing a production vSphere environment. I am going to attempt to do this topic justice by sharing some key points that should help you understand more about how you should design your network. I am not going to attempt to do what Kendrick Coleman does (because his designs are solid). I am only going to provide talking points and recommendations about traffic separation, why it should be separated, and the details. Also, keep in mind that one of the most important factors in networking is whether or not to use N+1. I will say that VMware highly recommends making your physical networking N+1 so you can benefit further from High Availability. So let’s get started with the traffic types.

Traffic Types:

  1. Management (High Availability)
  2. vMotion
  3. Fault Tolerance (Not in all cases)
  4. VM Networks
  5. Backup (Not in all cases)
  6. Storage/NAS (Depends on the type)

Note: Backup and Storage say “depends” because in some cases you may or may not have iSCSI/NAS storage or be running network-based backups for your virtual machines, especially if you use a product like Veeam or CommVault. Fault Tolerance isn’t really used much, and I believe that even when it does get better it still may not be worth it, considering the bigger workloads and the cost in licensing as well. Here are the recommendations and best practices I follow for dedicating traffic:

  1. Management: If possible, VLAN it and separate the traffic (to a different switch). Use teaming or a single NIC (if you set up a management kernel port on another port group). You can run/share this traffic with vMotion, Fault Tolerance, backup, and storage/NAS; if you do share traffic, use some sort of QoS or Network I/O Control. Be mindful that running management with all this traffic isn’t recommended, but it does give you a way to run it all over a separate switch apart from production VM traffic. If you have plenty of NICs you can run it over the VM production network (though you don’t want to expose it to that network), but you must somehow separate it with a different subnet or VLAN. In most cases I see vMotion and management being shared with Fault Tolerance (FT on big 10Gb networks). Your NIC teaming should use explicit failover with overrides so your vMotion/FT traffic goes over a separate interface from your management traffic (see the PowerCLI sketch after this list).
  2. vMotion-FT-Backup-Storage-NAS: L2 traffic that hopefully doesn’t have to be routed. In most cases I see this and management traffic being shared, especially with 10Gb. Combine vMotion+FT+Backup+NAS if you don’t have a ton of connections. On this particular setup it would be good to set up jumbo frames. You don’t want this traffic running over production if possible, so a dedicated switch would be really good; VMware recommends using a dedicated storage switch anyway.
  3. VM Networks: I usually dedicate two NICs for VM production traffic and usually create separate port groups for each type of VM-related traffic. In some cases you may have a customer who requires separating this out over different NICs. Again, this is just one of those things you have to look at based on requirements at the time. Normally the latter is good enough.
  4. Storage/NAS and Backup: In most cases businesses may have their own backup network. You could run storage and backup traffic over those switches if you choose. In that case, you might as well also run vMotion and FT over them.
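
To make the explicit failover idea in point 1 concrete, here is a small PowerCLI sketch on a standard vSwitch. The switch, port group, VLAN and vmnic names are all lab assumptions, not a prescription for your environment:

    # Create management and vMotion port groups on an existing vSwitch, then pin
    # each one to a different active uplink with the other uplink as standby.
    Connect-VIServer -Server vcenter.lab.local

    $esx = Get-VMHost -Name "esx01.lab.local"
    $vss = Get-VirtualSwitch -VMHost $esx -Name "vSwitch0"

    $mgmt    = New-VirtualPortGroup -VirtualSwitch $vss -Name "Management" -VLanId 10
    $vmotion = New-VirtualPortGroup -VirtualSwitch $vss -Name "vMotion"    -VLanId 20

    # Explicit failover order: management and vMotion prefer opposite uplinks
    $mgmt    | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive vmnic0 -MakeNicStandby vmnic1
    $vmotion | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive vmnic1 -MakeNicStandby vmnic0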

The Switches and considerations:

You usually want at least 2 switches if that is all you can do. In some cases, going to 4 is even better, because then you can look at N+1 and try to separate the big traffic from the little traffic (management). If you cannot separate it by using dedicated switches, then use QoS or NIOC to control the traffic. Managed switches can be expensive; just remember that vSphere 5 added support for LLDP, so you don’t have to buy Cisco gear to get that CDP-style information. If you do not plan on using a converged network architecture (FCoE), then be sure to buy enough 1Gb NICs. These things are cheap, so you can load up on them even if you may not use them all. Things like migrations come up, and if you only buy what you need you’ll end up robbing Peter to pay Paul.

This is really just a quick overview with recommendations. Unfortunately, we only have what we are given in most cases, and we also work off budgets. I am going to cover some lab exercises that break this down even further. General stuff… I hope you enjoy it, and I am sure I am going to be updating it as well.

Cheers,
Cwjking

[Resolved] VMware Workstation 8 – Windows XP VM Hang Issue

This video explains how I solved my own issue after I upgraded to VMware Workstation 8. Going through the process of removing and re-adding virtual devices narrowed mine down to the A: drive, aka the floppy disk. I simply disconnected it through VMware Workstation, and I would recommend removing it entirely if you do not need it. To do this you need to power down the VM. Just for the record, I did an upgrade of VMware Workstation; however, it completely uninstalls the old version and reinstalls the new one.

1. Power Down the VM

2. Right Click and go to “Settings”

3. Click on the Floppy Drive

4. Click Remove

5. Click Ok

*NOTE* You can also just uncheck the “Connect at power on” option instead. I hope this fixes your issue as well.
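
If you prefer editing the configuration file directly, the same thing can be done in the VM’s .vmx while it is powered off. This is the standard Workstation setting for the floppy device, shown here only as a reference snippet:

    floppy0.present = "FALSE"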
