

Virtualization – Looking back?

So in the past couple of weeks I have been looking at all sorts of different hardware platforms to run a vSphere environment. In our environment at work we run all kinds of different implementations: Dynamic Desktop Virtualization (DDV), Virtual Desktop Infrastructure (VDI), remote site virtualization, and finally server virtualization. We started late in the game but have made relatively great leaps in moving forward. We currently have about 5000 VDI desktops, 300 DDV desktops (very new), and over 2000 virtual servers across 2 data centers, not to mention the remote sites as well.

So obviously this begs the question: why haven't we looked at what UCS, vBlock, or other hardware platforms have to offer? Something tells me that in the heat of getting things onto a virtual platform we had to go with something very quickly. Our big virtual push began over a year ago, and as of today we still leverage HP hardware; most recently we built a cluster of 12 DL380s which are performing much better than what we have seen with our previous implementations. This still doesn't address other concerns like centralized management, randomized workloads, and consolidation of other things like networking, which UCS and HP can bring to the table. Not to mention Xsigo. I probably left a few out, but please forgive my ignorance.

When it's all said and done I am looking back asking why. I still think it would be great to see what UCS could bring to the table, but in our environment we deal with many different types of workloads. We have UNIX, Windows, Essbase, Oracle, SQL, Active Directory (one of the largest), and that is to name only a few, and we have yet to begin to virtualize these systems. Granted, we have gotten some Dev and Cert areas done, but we all know production is a different beast altogether. We implemented HP blades on our first go-around, with no 10GbE there. Now we have implemented DL380s, and still no 10GbE there either. Cisco UCS can bring a lot more ROI in the long run when you just look at network consolidation and management.

However, performance isn't really where we are hurting. The place I see the most room for improvement in the virtual infrastructure is the storage side of things. How do you maintain solid disk reads/writes without running into hot spots and taking a huge hit in performance? I have seen systems like IBM XIV and the NetApp PAM card that can really help with that, though I think NetApp is a better fit because our environment has so many different kinds of workloads.

I really hope we see different hardware platforms being tested this next year instead of taking everything from HP. Don't get me wrong, I like HP and they do have good products for virtualization, but I personally would like to see some competition and answers to some of these questions. Anyways, here is a bit on our environment.
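To put the hot-spot worry into rough numbers, here is a minimal back-of-the-envelope sketch (my own illustration, not anything from a particular array vendor): it estimates the backend IOPS a spindle group has to absorb once the RAID write penalty is applied, which is why a few write-heavy VMs landing on the same aggregate can drag everything else down. The disk counts, IOPS-per-disk figures, and workload mix below are made-up example values.

```python
# Rough IOPS sizing sketch: how a write-heavy workload mix eats into a
# RAID group's backend capacity. All numbers are illustrative assumptions.

RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def backend_iops(frontend_iops, write_ratio, raid_level):
    """Backend IOPS the spindles must service for a given frontend load."""
    reads = frontend_iops * (1 - write_ratio)
    writes = frontend_iops * write_ratio
    return reads + writes * RAID_WRITE_PENALTY[raid_level]

def spindle_capacity(disk_count, iops_per_disk):
    """Raw IOPS a group of spindles can deliver (ignoring any cache)."""
    return disk_count * iops_per_disk

if __name__ == "__main__":
    # Hypothetical aggregate: 24 x 15K disks (~180 IOPS each) in RAID5.
    capacity = spindle_capacity(disk_count=24, iops_per_disk=180)

    # Hypothetical mixed workload: 6000 frontend IOPS, 40% writes.
    demand = backend_iops(frontend_iops=6000, write_ratio=0.4, raid_level="RAID5")

    print(f"Spindle capacity : {capacity:,.0f} backend IOPS")
    print(f"Workload demand  : {demand:,.0f} backend IOPS")
    print("Hot spot likely!" if demand > capacity else "Headroom remains.")
```

Flash read caches like the NetApp PAM card help exactly here: they pull a chunk of the reads off the spindles, leaving more backend IOPS to soak up the write penalty.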

Feel free to comment!

***Disclaimer: The thoughts and views expressed on VirtualNoob.wordpress.org and by Chad King in no way reflect the views or thoughts of his employer or any other company. These are his personal opinions, formed on his own. Also, products improve over time and some things may be out of date. Please feel free to contact us and request an update and we will be happy to assist. Thanks!~


Cisco UCS – A lot for me to learn

Hello Hardware,

That's right, you heard me say it alright. CISCO UCS, or in Twitter terms #CiscoUCS #Cloud. Tonight I got my first stab at actually researching and reading up on Cisco UCS, and I have to say, it does sound promising. Right now, though, I haven't given much thought to the cost of such a system. Lately we see a lot of different offerings when it comes to hardware platforms to run a virtual shop on, and up until recently I hadn't even read about or seen a Cisco server in a while. In fact, the last time I saw a Cisco server was when CallManager was running on Windows 2000 SP4 (HP MCS hardware) back on version 5.5. I guess I am beginning to get old…

Enough Said… let’s move on… Nothing to see here..

The first reading I did on Cisco UCS was today on Cisco's site: http://bit.ly/grL4EY

Joe wrote on inter-fabric communication on the Cisco blade servers. It piqued my interest, seeing how UCS is uniquely designed to handle communication.

You can run the fabric interconnects in two separate modes: End-Host Mode (EHM) and Switch Mode. Most users typically choose EHM for simplicity. It took me a while to get it all to sink in, but I think I finally got it in a nutshell. The big point is that you get 10GbE, and if you need to manage traffic more effectively at the host level, you can utilize vSphere switching such as the vSS, vDS, or Cisco Nexus 1000V. Essentially, the Nexus 1000V is what you can use to make it even more manageable. It also seems like it is definitely geared toward the cloud due to the so-called simplicity. You will still have to utilize 10GbE networks, which can still cost a pretty penny. I am just glad it is finally beginning to make sense… at least right now.
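As a concrete (and purely illustrative) look at that host-level switching layer, here is a small sketch using the pyVmomi SDK that lists the standard vSwitches and any distributed-switch proxies each ESXi host is carrying; the vCenter hostname and credentials are placeholders, and a Nexus 1000V would simply show up as another distributed switch in the same listing.

```python
# Sketch: enumerate standard vSwitches and distributed-switch proxies per host.
# Hostname/credentials are placeholders; requires the pyVmomi SDK.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def list_host_switches(vcenter, user, pwd):
    ctx = ssl._create_unverified_context()          # lab only: skip cert checks
    si = SmartConnect(host=vcenter, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            net = host.config.network
            print(host.name)
            for vss in net.vswitch:                 # standard vSwitches (vSS)
                print(f"  vSS : {vss.name} ({vss.numPorts} ports)")
            for proxy in net.proxySwitch:           # vDS / Nexus 1000V proxies
                print(f"  vDS : {proxy.dvsName}")
        view.DestroyView()
    finally:
        Disconnect(si)

if __name__ == "__main__":
    list_host_switches("vcenter.example.local", "administrator", "password")
```

Nothing UCS-specific in that snippet, of course; the point is just that whichever fabric interconnect mode you pick upstream, the vSS/vDS/1000V choice is still what decides how traffic gets shaped per host.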

Props to Joe, who did a good job, and I think he even knows a thing or two about VMware. 😉

Thanks to Adam, the hashtag has been corrected!

***Disclaimer: The thoughts and views expressed on VirtualNoob.wordpress.org and by Chad King in no way reflect the views or thoughts of his employer or any other company. These are his personal opinions, formed on his own. Also, products improve over time and some things may be out of date. Please feel free to contact us and request an update and we will be happy to assist. Thanks!~
