Blog Archives

vSphere Experience 101 – Beginnings and VDI

Welcome to my world of vSphere. Here I will be sharing my experiences with vSphere and the challenges we have faced with its various implementations. Being a part of one of the largest infrastructures in the world, we have both learned from our mistakes and learned the importance of “best practices” when working within a VI (Virtual Infrastructure) environment. Like most large enterprises, we started out with MS Virtual Server and used VMware very little at first. Being an MS shop proved beneficial when it came to licensing and such, which is probably why we began with it. Oh, those were the days…

After much testing in the lab I noticed that the virtual environment began to grow dramatically. We soon had clusters in our lab for testing and working with various implementations. The MS Virtual Server boxes sat in the racks, still running the 3 or 4 VMs they had always run, even though that solution never really grew much. Hyper-V soon came out, but after seeing the maturity of VMware over Hyper-V the choice still seemed pretty obvious. Testing went on and slowly bled out into our production environment. We had some clusters of Dell PE2900s and R900s that were being used for some of our home office data center servers. Finally, after a slow rollout, in the third quarter of 2008 I got to experience the first large implementation of VI3 in our environment.

The first large-scale implementation used Dell PowerEdge M1000e blade chassis fully populated with M600s, attached to Dell EqualLogic iSCSI arrays for storage. Each host had 4 NICs and 32 GB of RAM (4 NICs seemed like plenty at the time but ended up being too few). This was the beginning of our first desktop virtualization, or VDI (Virtual Desktop Infrastructure), implementation, running ESX 3.5 Update 4.

This was the first initiative to help cut down the large costs associated with vendor desktops sprawled all throughout our technical center. You could walk the aisles of our tech center and see anywhere from 1 to 12 desktops stacked up that various technical teams used for vendors, or even for themselves, for remote access, support, and development services. When it was all said and done I am sure the number of systems was well over a few thousand. That is a huge amount of overhead when you look at the power cost alone. This VDI solution was the remedy to that problem, but being the first implementation, it was not our best. Little did we know about the risk of “disk alignment” and its effects when using the iSCSI arrays with our VDI environment. For the brokers, aka gateway servers, we used Citrix NetScalers (SSL), and from there we managed various applications that were streamed to the VDI environment and other TCs (thin clients).

All users were set up and managed from the Citrix broker servers. We already had Citrix licensing purchased, so it made sense to simply utilize what we had for this portion of the VDI environment. One other problem with this implementation was that the engineers didn’t plan HA very well in the layout of hosts vs. blade chassis. Instead of spreading the hosts across blade chassis, they put all 16 hosts in a single chassis. This could have a huge impact in the event that an entire blade chassis went down – it would take out an entire DRS/HA cluster. Also, due to the “disk alignment” issue, we had to perform a storage migration to remedy the problem, which turned into an extensive project with many hours of labor, all of which could have been avoided with proper thought and research up front. The real downer was that this ruined the chance for iSCSI to prove itself as a P.O.C. (proof of concept).
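For anyone wondering what “disk alignment” actually means here: if a guest partition starts at an offset that is not a multiple of the array’s block/stripe size, every guest I/O can straddle two backend blocks, roughly doubling the work the array does – brutal at VDI scale. Below is a minimal Python sketch of that check; the 64 KB block size and the sample offsets are illustrative assumptions, not our EqualLogic settings.

```python
# Minimal sketch: check whether a guest partition's starting offset lines up
# with the storage array's block/stripe size. The 64 KB block size and the
# sample offsets below are illustrative assumptions, not vendor values.

ARRAY_BLOCK_SIZE = 64 * 1024  # assumed 64 KB stripe/block size


def is_aligned(partition_offset_bytes: int, block_size: int = ARRAY_BLOCK_SIZE) -> bool:
    """A partition is aligned if its starting offset is an exact multiple of the
    array block size, so guest I/O never straddles two backend blocks."""
    return partition_offset_bytes % block_size == 0


if __name__ == "__main__":
    # Classic Windows XP/2003 default: partition starts at sector 63 (63 * 512 bytes)
    legacy_offset = 63 * 512
    # Aligned layout: partition starts at 1 MB, as newer installers do by default
    aligned_offset = 1024 * 1024

    for name, offset in [("legacy 63-sector start", legacy_offset),
                         ("1 MB start", aligned_offset)]:
        status = "aligned" if is_aligned(offset) else "MISALIGNED"
        print(f"{name}: offset {offset} bytes -> {status}")
```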

Lesson learned, and learned well for that matter. Now we are beginning the second phase of our VDI environment, using high-performance NetApp arrays with NFS-based storage. We still use the old Dells, but the current hosts are HP BL-class blades. The new hosts have 72 GB of RAM, two quad-core processors, and 6 NICs – though I still believe it would have been worth just doing 2x 10GbE. The old EqualLogic iSCSI storage is currently unused, and the Dell blades are utilizing the NetApp NFS as well. The second phase is on vSphere 4, and we are also using the PSA plugin for NetApp, which is quite nice. Though we are not using all the features of the NetApp storage solution, the performance improvement over the old iSCSI setup has been dramatic. Of course, good planning and engineering is always a good thing. This time we spread the hosts across blade chassis: there are approximately 16 hosts per DRS/HA cluster, placed 4 per chassis across separate chassis, so in the event of a chassis failure or network issue we would only lose 4 hosts at most. Currently we manage failover capacity by percentage.
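For reference, this is roughly what the percentage-based admission control setting looks like if you drive it through the vSphere API instead of the client. This is only a sketch using the pyVmomi Python bindings; the vCenter address, credentials, cluster name, and the 25% CPU/memory reservations are assumptions for illustration, not our actual values.

```python
# Rough sketch: set percentage-based HA admission control on a cluster via the
# vSphere API using the pyVmomi Python bindings. The vCenter address,
# credentials, cluster name, and the 25% reservations are assumptions.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim


def find_cluster(content, name):
    """Return the ClusterComputeResource with the given name, or None."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    return next((c for c in view.view if c.name == name), None)


def reserve_failover_capacity(cluster, cpu_pct=25, mem_pct=25):
    """Reconfigure HA to reserve a percentage of cluster CPU/memory for failover."""
    policy = vim.cluster.FailoverResourcesAdmissionControlPolicy(
        cpuFailoverResourcesPercent=cpu_pct,
        memoryFailoverResourcesPercent=mem_pct)
    das_config = vim.cluster.DasConfigInfo(
        enabled=True,
        admissionControlEnabled=True,
        admissionControlPolicy=policy)
    spec = vim.cluster.ConfigSpecEx(dasConfig=das_config)
    # modify=True merges this spec with the cluster's existing configuration
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)


if __name__ == "__main__":
    ctx = ssl._create_unverified_context()  # lab shortcut; use proper certs in production
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password", sslContext=ctx)
    try:
        cluster = find_cluster(si.RetrieveContent(), "VDI-Cluster-01")
        if cluster is None:
            raise SystemExit("Cluster not found")
        task = reserve_failover_capacity(cluster, cpu_pct=25, mem_pct=25)
        print("Reconfigure task submitted:", task.info.key)
    finally:
        Disconnect(si)
```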

Though VDI is still relatively new to us, we are still learning the do’s and don’ts as we go along. One thing is for certain: we will have to get all our 3.5 hosts to 4.1. Our new environment is being used for Dynamic Desktop Virtualization. It’s getting closer each day, and we finally got to give it a run when VMware View 4.5 was released. So far we have liked what we have seen, though I think we will still use Citrix for profile management/apps and MS App-V in the future.

***Disclaimer: The thoughts and views expressed on VirtualNoob.wordpress.org and by Chad King in no way reflect the views or thoughts of his employer or any other company. These are his personal opinions, formed on his own. Also, products improve over time and some things may be out of date. Please feel free to contact us and request an update and we will be happy to assist. Thanks!~


Virtualization – Looking back?

So in the past couple of weeks I have been looking at all sorts of different hardware platforms to run a vSphere environment. In our environment at work we run all kinds of different implementations: Dynamic Desktop Virtualization (DDV), Virtual Desktop Infrastructure (VDI), remote site virtualization, and finally server virtualization. We started late in the game but have made relatively great leaps in moving forward. We currently have about 5,000 VDI desktops, 300 DDV desktops (very new), and over 2,000 virtual servers across two data centers, not to mention the remote sites as well.

So obviously this begs the question: why haven’t we looked at what UCS, vBlock, or other hardware platforms have to offer? Something tells me that in the heat of getting things onto a virtual platform we had to go with something very quickly. Our big virtual push began over a year ago, and as of today we still leverage HP hardware; most recently we built a cluster of 12 DL380s, which are performing much better than what we have seen with our previous implementations. This still doesn’t address other concerns, though, like centralized management, randomized workloads, and consolidation of things like networking – which UCS and HP can bring to the table. Not to mention Xsigo. I probably left a few out, but please forgive my ignorance.

When it’s all said and done, I am looking back and asking why. I still think it would be great to see what UCS could bring to the table, but in our environment we deal with many different types of workloads. We have UNIX, Windows, Essbase, Oracle, SQL, Active Directory (one of the largest), and that is to name only a few – and we have yet to begin virtualizing those systems. Granted, we have gotten some dev and cert areas done, but we all know production is a different beast altogether. We implemented HP blades on our first go-around – no 10GbE there. Now we have implemented DL380s, and still no 10GbE there either. Cisco UCS can bring a lot more ROI in the long run when you just look at network consolidation and management. However, performance isn’t really where we are hurting – the place I see the most room for improvement in the virtual infrastructure is the storage side of things. How do you maintain solid disk reads/writes without running into hot spots and taking a huge hit in performance? I have seen systems like the IBM XIV and the NetApp PAM (Performance Acceleration Module) card that can really help with that, though I think NetApp is the better fit given how many different kinds of workloads our environment has. I really hope we see different hardware platforms being tested this next year instead of taking everything from HP. Don’t get me wrong, I like HP and they do have good products for virtualization, but I personally would like to see some competition and answers to some of these questions. Anyway, here is a bit on our environment.

Feel free to comment!

***Disclaimer: The thoughts and views expressed on VirtualNoob.wordpress.org and by Chad King in no way reflect the views or thoughts of his employer or any other company. These are his personal opinions, formed on his own. Also, products improve over time and some things may be out of date. Please feel free to contact us and request an update and we will be happy to assist. Thanks!~
