
Category Archives: Cloud

Going Sky High

Cloud Management – Turbonomics is Awesome!

Hello my friends! It has been quite a while since I last blogged, but I wanted to take some time to share some of my experience over the past couple of years. I have had the opportunity to work with some great companies and people, and it has definitely been a very enlightening experience.

I had the privilege of being a part of a special project nearly 5 years ago, which began my career in the cloud. I got to engineer and deploy one of the nation's first GSA clouds, which was a great experience. As time rolled on and cloud was adopted, many things came to light. Being a VMware-savvy guy, I really didn't have the time to spend learning all these new, directly competing technologies. At the time, Amazon was getting big, VMware was about to release vRA, and the market stood still… or so it felt.

Microsoft had launched their on-prem cloud, and before we knew it we had to start getting serious about the cost of our delivery and compute. If you have never had the pleasure of working for a service provider, let me tell you – it's all about cost. So we put Azure to the test: compared it, vetted it, did everything we could to ensure it could be operationally supported. It was a very interesting time and a nice comparison to our existing IaaS architecture. We definitely had our work cut out for us.

Since then, the challenges of hybrid cloud have become real. Although some vendors had good solutions, like UCS InterCloud Fabric or vCloud Connector… (insert whatever else here), we always seemed to have requirements unique enough to disqualify them. Needless to say, we still deployed them, stood them up, tested them, and found real value – but it still wasn't justifiable enough to warrant a change. Being a service provider isn't about offloading to another cloud… it's about how you can upsell your services and provide more value for customers.

As time went on, people adopted Cisco UCS into their infrastructures, and eventually updating and maintaining infrastructure became critical: the speed of delivery is only hindered by how fast we can adopt new offerings. If we cannot seamlessly update, migrate, or refresh, then what can we do?

“It's so old it's not even supported!”
“Wow, no new firmware for 5 years?!”
“Support for VMware has lapsed :(”

“Who cares?”

You can automate this pain away easily. Just because one vendor doesn't support a feature or a new version does not mean you have to keep burdening your IT staff. If you could standardize operational processes across your cloud(s) – visibility, integration, and support – would you?

The biggest challenge is getting out of the old and into the new. Most legacy infrastructure runs on VMware, and you can do this with Turbonomics and a variety of other tools. One of the benefits of going third party is that you don't have lock-in to any infrastructure or software. You can size it, optimize it, price it, and compare it to ensure things run as they should. Versioning and upgrades will always be a challenge, but as long as you can ensure compliance, provisioning, optimization, and performance, they won't be an afterthought. I found Turbonomics always got the job done and always responded in a way that provided a solution – and more than that, at the push of a button.

Some of the benefits:
– Agnostic Integration with a large set of vendors
– Automated Provisioning for various types of compute
– Easily retrofit existing infrastructure for migration
– Elastic compute models
– Cost comparison and pricing of existing workloads (e.g. Amazon AWS, Azure)
– Track and exceed your ROI Goals
– Eliminate Resource Contention
– Automate and schedule migrations between compute platforms (IaaS > DBaaS)
– Assured performance, control, and automated re-sizing
– Not version dependent and can be used in a wide variety of scenarios (I can elaborate if needed)
– Get rolling almost instantly with it…
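To make the cost-comparison and rightsizing bullets concrete, here is a minimal sketch of the underlying idea: pick the cheapest instance shape that still covers a workload's observed peak demand. Everything here – the catalog shapes, the prices, and the `rightsize` helper – is hypothetical and for illustration only; this is not the Turbonomics API.

```python
# Illustrative sketch of the kind of rightsizing / cost comparison a tool
# like Turbonomics automates. All shapes and prices are hypothetical.

CLOUD_CATALOG = {
    "small":  {"vcpu": 2, "ram_gb": 4,  "usd_hr": 0.10},
    "medium": {"vcpu": 4, "ram_gb": 16, "usd_hr": 0.35},
    "large":  {"vcpu": 8, "ram_gb": 32, "usd_hr": 0.70},
}

def rightsize(peak_vcpu_used, peak_ram_gb_used):
    """Pick the cheapest instance shape that still covers observed peak demand."""
    fits = [
        (shape["usd_hr"], name)
        for name, shape in CLOUD_CATALOG.items()
        if shape["vcpu"] >= peak_vcpu_used and shape["ram_gb"] >= peak_ram_gb_used
    ]
    if not fits:
        return None  # demand exceeds every shape; scale out instead of up
    return min(fits)[1]

# A VM provisioned "large" that only ever peaks at 3 vCPU / 10 GB
# can land on "medium" and cut its hourly cost in half.
print(rightsize(3, 10))   # -> medium
print(rightsize(16, 64))  # -> None
```

The point is that the decision is driven by observed peak usage, not by what the VM was originally provisioned with – which is exactly where the ROI comes from.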

Five years on, and I still think Turbonomics is a great product. I used it extensively in the early days and also worked with it during the vCloud integration piece. The free version is also amazing and very helpful. Spending time checking capacity, double-checking data, ensuring things are proper and standard – all that stuff you can forget about. You can configure your clouds – private, public, or dedicated – into Turbonomics quickly.

You just have to trust proven software, especially when it's been 7 years in the making and exceeds capabilities that most tools require significant configuration for. Also, keep in mind that Turbonomics can learn your environment, and the value of understanding the platform and providing insight can be huge. You have to admit that some admins may not understand or know other platforms. This simplifies all that by simply understanding the workload and the infrastructure it runs on.

Other Great Information or References:
Cisco One Enterprise Suite – Cisco Workload Optimization Manager:
CWOM is offered with the Cisco ONE Enterprise Suite:
http://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/one-enterprise-suite/solution-overview-c22-739078.pdf
http://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/one-enterprise-suite/at-a-glance-c45-739098.pdf
https://www.sdxcentral.com/articles/news/cisco-launches-new-ucs-servers-hybrid-cloud-management-software/2017/07/

Turbonomics and BMC:
“Running it Red Hot with Turbonomics”
https://turbonomic.com/resources/videos/cloud-economics-on-prem-or-off-with-turbonomic-bmc/

 


vSphere 5 – Storage pt.2: vCloud and vSphere Migrations

The point..

In my last post I covered some things to think about when looking at the new VMFS-5 partitions. Obviously the point of moving to the new VMFS is to gain all the benefits explained in that previous post. In this post you will see the types of migrations themselves. I also shared some resources at the bottom for those of you who want to dig deeper. There isn't a ton of documentation out there highlighting this, nor the special *features* of vSphere 5 (sVMotion issues??) that you may run into. So let's hope I do this justice. On to the blog!

Adding VMFS-5 to the vCloud

  1. Log in to vSphere and ensure you have a new LUN provisioned (covered above in the how-to).
  2. Log in to the vCloud Director web interface; you must be an administrator.
  3. Click the "System" tab and click on Provider VDCs. Right-click a PVDC and select "Open".
  4. After opening the PVDC, select the Datastores tab and then click the +/- button to add/remove datastores.

  5. Browse through the datastores by clicking the > button or by searching in the top right. When you have located your datastore, highlight it, click the button, then click "OK". Disregard the warning.


(Note: the yellow highlights are ways you can search and browse through datastores. This is very handy when there are many to look through)


(Note: Highlight in yellow shows the datastore added successfully. This is a 20TB Datastore)

You will now see the datastore in the datastore summary tab for that PVDC.

Migrating Virtual Machines in vCloud Director to the "new" VMFS-5 LUN

  1. Make sure the vApp is NOT a linked clone. If it is a linked clone defer to the references below.
  2. Ensure the datastore you want to Storage vMotion the virtual machine to is also provisioned to the Org VDC. Do this by opening the Org VDC and selecting the "Datastores" tab.

    Note: you can see both datastores are attached to this VDC with the organization known as App1

  3. You can then log in to the vSphere Client against the noted vCenter and perform a Storage vMotion. Another way of doing a Storage vMotion is using the script William Lam wrote (see references below).
  4. If you need to perform the sVMotion, defer to the following method below.

NOTE: I would highly recommend that you roll out Update 1 to all vCloud components. It addresses a few major fixes that allow operations to run more smoothly. More importantly, without it the only way to sVMotion vCloud VMs is to power them off – a pretty common issue with vanilla vSphere 5 / vCloud 1.5 rollouts, and one I ran into myself. For more information please see the references at the bottom.

Migrate a Virtual Machine with Storage vMotion in vSphere

Use migration with Storage vMotion to relocate a virtual machine's configuration file and virtual disks while the virtual machine is powered on. You cannot change the virtual machine's execution host during a migration with Storage vMotion. (Note: if the VM is managed by vCloud and you are not at 1.5 Update 1, you may need to power off the virtual machine to perform the sVMotion. If the virtual machine is fast-provisioned (a linked clone), you will need to perform the sVMotion through the API.)

Procedure

  • Ensure you are not moving a vCloud vApp; if you are, please follow the above process first.
  • Display the virtual machine you want to migrate in the inventory.
  • Right-click on the virtual machine, and select Migrate from the pop-up menu.
  • Select Change datastore and click Next.
  • Select a resource pool (the same one) and click Next.
  • Select the destination datastore:
    To move the virtual machine configuration files and virtual disks to a single destination, select the datastore and click Next.
    To select individual destinations for the configuration file and each virtual disk, click Advanced. In the Datastore column, select a destination for the configuration file and each virtual disk, and click Next.
  • Select a disk format and click Next:
    Same as Source – Use the format of the original virtual disk. If you select
    this option for an RDM disk in either physical or virtual compatibility
    mode, only the mapping file is migrated.

    Thin provisioned – Use the thin format to save storage space. The thin
    virtual disk uses just as much storage space as it needs for its initial
    operations. When the virtual disk requires more space, it can grow in size
    up to its maximum allocated capacity. This option is not available for
    RDMs in physical compatibility mode. If you select this option for a
    virtual compatibility mode RDM, the RDM is converted to a virtual disk.
    RDMs converted to virtual disks cannot be converted back to RDMs.

    Thick – Allocate a fixed amount of hard disk space to the virtual disk.
    The virtual disk in the thick format does not change its size, and from
    the beginning occupies the entire datastore space provisioned to it. This
    option is not available for RDMs in physical compatibility mode. If you
    select this option for a virtual compatibility mode RDM, the RDM is
    converted to a virtual disk. RDMs converted to virtual disks cannot be
    converted back to RDMs.

    NOTE: Disks are converted from thin to thick format, or thick to thin,
    only when they are copied from one datastore to another. If you choose to
    leave a disk in its original location, the disk format is not converted,
    regardless of the selection made here.

  • Review the page and click Finish.
  • A task is created that begins the virtual machine migration process.
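The disk-format rules from the table above boil down to a few branches, which can be handy to sanity-check before a migration. This is a minimal sketch with names of my own choosing – it restates the documented behavior as plain logic rather than calling any VMware API.

```python
# Sketch of the disk-format rules in the table above, encoded as plain logic.
# The function and argument names are hypothetical, not a VMware API.

def migrated_format(current, target, rdm_mode=None, moves_datastore=True):
    """current: 'thin' or 'thick'; target: 'same', 'thin', or 'thick'.
    rdm_mode: None, 'physical', or 'virtual' for raw device mappings.
    Returns the resulting disk format after the migration."""
    if rdm_mode == "physical":
        # Thin/thick are not available for physical-mode RDMs;
        # only the mapping file is migrated.
        if target != "same":
            raise ValueError("thin/thick not available for physical-mode RDMs")
        return "rdm-physical"
    if rdm_mode == "virtual" and target in ("thin", "thick"):
        # One-way conversion: the RDM becomes a regular virtual disk
        # and cannot be converted back.
        return target
    if target == "same":
        return current
    # Thin <-> thick conversion only happens when the disk actually
    # moves to another datastore.
    return target if moves_datastore else current

print(migrated_format("thick", "thin"))                         # -> thin
print(migrated_format("thick", "thin", moves_datastore=False))  # -> thick
```

The last branch captures the NOTE in the table: picking "thin" does nothing for a disk that stays in its original location.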

References:

Linked Clones:
http://www.virtuallyghetto.com/2012/04/scripts-to-extract-vcloud-director.html
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1014249

Storage Motion Issue:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2012122

How-tos for sVMotion, CLI/vCO style:
http://www.virtuallyghetto.com/2012/02/performing-storage-vmotion-in-vcloud.html
http://www.virtuallyghetto.com/2012/02/performing-storage-vmotion-in-vcloud_19.html
http://geekafterfive.com/2012/03/06/vcloud-powercli-svmotion/
http://geekafterfive.com/tag/vcloud/
http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-501-virtual-machine-admin-guide.pdf

Storage Considerations for vCloud:
http://www.vmware.com/files/pdf/techpaper/VMW_10Q3_WP_vCloud_Director_Storage.pdf

VMware vSphere – Networking Best Practices – Networking 101 pt.2

Okay experts, take it easy on me…

As you know, I have been writing various posts around building your VMware Workstation lab. One of the key points I am trying to drive home during this lab build is how to get your environment matching a production environment as closely as you can. Networking is a very broad subject, especially when it comes to implementing a production vSphere environment. I am going to attempt to do this topic justice by sharing some key points that should help you understand how you should design your network. I am not going to attempt to do what Kendrick Coleman does (because his designs are solid); I am only going to provide talking points and recommendations about traffic separation, why it should be separated, and the details. Also, keep in mind that one of the most important factors in networking is whether or not to use N+1. VMware highly recommends making your physical networking N+1 so you can benefit further from High Availability. So let's get started with the traffic types.

Traffic Types:

  1. Management (High Availability)
  2. vMotion
  3. Fault Tolerance (Not in all cases)
  4. VM Networks
  5. Backup (Not in all cases)
  6. Storage/NAS (Depends on the type)

Note: Backup and storage say "depends" because in some cases you may or may not have iSCSI/NAS storage or be running backups for your virtual machines, especially if you use a product like Veeam or CommVault. Fault Tolerance isn't widely used, and I believe that even when it gets better it still may not be worth it, considering the bigger workloads and the licensing cost as well. Here are my recommendations and the best practices I follow for dedicating traffic:

  1. Management: If possible, VLAN it and separate the traffic (onto a different switch). Use teaming or a single NIC (if you set up a management kernel on another port group). You can run/share this traffic with vMotion, Fault Tolerance, backup, and storage/NAS; if you do share, use some sort of QoS or Network I/O Control. Be mindful that running management with all this traffic isn't recommended, but it does give you a way to run all of it over a separate switch apart from production VM traffic. If you have plenty of NICs you can run it over the VM production network (though you don't want to expose it to that network), but you must somehow separate it with a different subnet or VLAN. In most cases I see vMotion and management being shared with Fault Tolerance (FT on big 10Gb networks). Your NIC teaming should use explicit failover and override so your vMotion/FT traffic goes over a separate interface from your management traffic.
  2. vMotion/FT/Backup/Storage/NAS: L2 traffic that hopefully doesn't have to be routed. In most cases I see this shared with management traffic, especially on 10Gb – vMotion + FT + backup + NAS if you don't have a ton of connections. On this particular setup it would be good to enable jumbo frames. If possible you don't want this traffic running over production, so a dedicated switch would be really good; VMware recommends a dedicated storage switch anyway.
  3. VM Networks: I usually dedicate two NICs to VM production traffic and create separate port groups for each type of VM-related traffic. In some cases a customer may require separating this out over different NICs. Again, this is one of those things you have to look at based on the requirements at the time; normally the first approach is good enough.
  4. Storage/NAS and backup: In most cases businesses may have their own backup network. You could run storage and backup traffic over those switches if you choose; in that case, you might as well run vMotion and FT over them too.
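The recommendations above can be captured in a small audit script. This is only an illustrative sketch under an assumed uplink layout – the `host_uplinks` structure and names are mine, not a VMware API – checking the two recurring themes: keep management off the production VLAN, and give every traffic type an N+1 (redundant) uplink.

```python
# Sketch of sanity-checking a host's traffic separation. The uplink layout
# and dictionary shape below are hypothetical examples.

host_uplinks = {
    "vmnic0": {"traffic": ["management", "vmotion"], "vlan": 10},
    "vmnic1": {"traffic": ["management", "vmotion"], "vlan": 10},  # N+1 pair
    "vmnic2": {"traffic": ["vm-production"], "vlan": 20},
    "vmnic3": {"traffic": ["vm-production"], "vlan": 20},
}

def check_separation(uplinks):
    """Return warnings for management sharing a VLAN with VM production
    traffic, and for any traffic type with no redundant uplink."""
    warnings = []
    vlans, counts = {}, {}
    for cfg in uplinks.values():
        for t in cfg["traffic"]:
            vlans.setdefault(t, set()).add(cfg["vlan"])
            counts[t] = counts.get(t, 0) + 1
    if vlans.get("management", set()) & vlans.get("vm-production", set()):
        warnings.append("management shares a VLAN with VM production traffic")
    for t, n in counts.items():
        if n < 2:
            warnings.append(f"{t} has no redundant uplink (N+1)")
    return warnings

print(check_separation(host_uplinks))  # -> [] (clean layout)
```

A layout that stacked management on a single NIC in the production VLAN would come back with both warnings, which is exactly the situation the list above tells you to design out.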

The Switches and considerations:

You usually want two switches if that is all you can do; four would be even better, because then you can look at N+1 and try to separate the big traffic from the little traffic (management). If you cannot separate it using dedicated switches, then use QoS or NIOC to control the traffic. Managed switches can be expensive; just remember that vSphere 5 added support for LLDP, so you don't have to buy Cisco gear to get that CDP-style information. If you do not plan on using a converged network architecture (FCoE), be sure to buy enough 1Gb NICs. They are cheap – load up on them even if you may not use them all. Things like migrations come up, and if you only buy what you need, you'll end up robbing Peter to pay Paul.

This is really just a quick overview with recommendations. Unfortunately, in most cases we only have what we are given, and we also work off budgets. I am going to cover some lab exercises that break this down even further. General stuff… I hope you enjoy it, and I am sure I will be updating it as well.

Cheers,
Cwjking

Blah Blah Cloud… part 1

When you look at cloud today in the context of VMware, what is your biggest concern? For some of us it may be networking, for others storage, or maybe something broader like availability, scalability, and BU/DR. Since I have been working with vCloud Director day in and day out, I have been asking deeper technical questions centered on the scalability of storage and other components of the overall design. I have been challenged in various ways by this technology. Prior to vCloud it was vSphere, and a lot of how you implemented and managed vSphere was much less complex. Cloud brings another level of complexity – especially if your initial "design and management" is poor to begin with. Usually you end up spending more time and money going back to address issues related to simple best practices that most architects and engineers should already know. In some cases it's a disconnect between the design and infrastructure team and the help desk. This may not always be the case, but in my experience it happens more often than not.

I am sure we could all spend plenty of time talking about operations, procedures, protocols, standards, and blah blah… but that isn't the point of this blog… even though these things are of the highest importance, and the more effort you put into them, the better your results and the less you end up spending. Anyways…

So, as I was saying, vCloud has challenged me in several ways. Now I not only have to consider the design of vSphere, but also the design of vCloud Director and how we manage all these different components. Simply adding vCloud Director doesn't mean that is the end of it all; more complexity comes with integration of other applications, application availability, and backup and DR. I have been amazed at how many oversights I see due to the lack of expertise in this area. No offense to anyone, but really, VMware cloud is still in its infancy compared to other markets – though I strongly sense that VMware will hold majority market share for a while.

Crossing the gaps:
Since I have been studying and learning day in and day out, covering VMware best practices and other companies' best practices, I continue to see a lot of disconnects in certain areas (vCloud Director). Storage guys often have no clue about running virtualized workloads on arrays, and often they don't even care to learn about VMware. Usually they already have plenty to do, but this disconnect will affect the implementation at some level. I honestly say that in most cases the architect should be the one researching and ensuring that all the components that make up the cloud computing stack are standardized and implemented correctly; even so, these gaps still cause setbacks. Which leads me into the networking side of things. Network engineers, I see, are beginning to come up to speed more quickly on virtualization. The main factor is Cisco UCS and how it appeals to network administrators and engineers – add to that FCoE/CNAs. However, the disconnect once again lies in the knowledge transfer of how the virtual platform works and the best practices designed around VMware. I am the first to say that many don't really get a choice, especially if a company just threw you into the fire. Right now we are looking at giving our network team the keys to the kingdom (Cisco UCS), yet they have nearly ZERO understanding of and training in how any of it works… scary, right? We have to cross these gaps, people. We need to make sure we have people positioned who can understand and impart that training, or have someone available as a resource.

My Real Concerns:
vCloud Director was totally new and alien to me when I first stepped into the cloud. I had to learn, and quickly. Given my background, I quickly went to the manuals, read the blogs, got plugged into good sources, learned even more, read books, and started auditing. I started looking at designs that seemed questionable and asking "Is it ignorance?" or "What the … was he thinking?" – and quickly found that usually it was just simple ignorance. No one really is to blame, because we have to understand that YES, it is a NEW technology – but that makes it all the more critical that we research and ensure we are implementing a rock-solid design before rolling it out. Yes, I know deadlines are deadlines, but it is what it is either way: you either spend a lot more money in the long run or spend a little bit more to get it right the first time. We are now having to go back and perform a second phase, and for the past couple of months we have been remediating a lot of different things that could have been done right had a simple template been designed correctly. We now spend countless additional hours updating and working issues because of this one simple thing – and that isn't even getting into the storage and other concerns I have.

Cloud and What’s Scary?:
Yeah, I know, right – scary? I don't know about yours, but some of the clouds I have seen are. Here is what scares the heck out of me: ABC customer decides to deploy a truckload of Oracle, MSSQL, IIS, WebLogic, etc. virtual machines all on the fly. Next thing we know, we see some latency on the storage back end and some impact to performance. Come to find out, a bunch of cloning operations are kicking off… I/O is spiking, the VMs are issuing many kinds of IOPS, and in a matter of about 12 hours we are having some major issues. This is called "scalability" – or sometimes "elasticity," whatever you want to call it. Some catalogs host every kind of application, and the majority of the apps are tier-1 virtualized workloads. This isn't the little stuff most corporations virtualize; they usually put that stuff off for later because of the need for high-performance servers, and old traditional thinking still tells them not to do it (playing it safe). Scaling a cloud to accommodate tier-1 workloads is something I think we are going to be seeing a lot more of. In fact, most vendors provide documentation for implementing solutions on VMware vCloud Director – but they almost NEVER cover the application workloads. I am speaking of storage, networking, and server hardware. This is probably because, due to the mixed nature you can have in an environment, you should do THOROUGH testing to ensure you can scale out and run an optimal number of workloads… some would call that vBlock.

Anyways, I didn't mean to write a post this long, but I have had a lot on my mind lately, and I will continue to write more as I continue my VMware cloud journey.

Cheers,

***Disclaimer: The thoughts and views expressed on VirtualNoob.wordpress.com by Chad King in no way reflect the views or thoughts of his employer or any other company. These are his personal opinions, formed on his own. Also, products improve over time and some things may be out of date. Please feel free to contact us and request an update and we will be happy to assist. Thanks!~

Brown Bag Cloud Architecture “Eco-System” Notes

So I checked out the Brown Bag over at Cody's site. I have to say I was very happy with the cloud architecture session. It was definitely a different spin to learn about the "ecosystem" and to listen to some of the questions. Here are the notes I gathered:

  • vSphere which was once the management layer is now more of an application layer
  • vSphere administrators may not be vCloud administrators
  • The architecture shows it is best practice to have two vCenters: one for the cloud service, the other for management
  • Automation and orchestration are more important
  • Different security levels for Cloud and vSphere Management
  • Scaling the cloud out brings additional complexity to traditional virtual BU/DR technologies
  • Most scale-outs of the cloud are driven more by use case than by hitting the vCenter maximums
    • BU/DR requirements
    • PCI Compliance (Particular Security Use-Case)
  • The VCD database is still not "officially" supported when running Oracle RAC "Active/Active" – only "Active/Standby" configurations
  • Templates are simply "powered off" VMs in VCD
  • Network copy happens between clusters (different storage)
  • Cloning has always been a block/file copy between hosts (VAAI helps)
  • When VCD deploys vApps to the assigned tenant datastores, the vApps deploy on the datastore that has the least amount of "USED" space
  • VCD requires DRS – NEVER disable it
  • Linked clones do not have misalignment on NFS
  • NFS seems to be gaining momentum for VCD deployments
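The "least USED space" placement note above is worth pinning down, since it is not the same as "most free space." Here is a minimal sketch with made-up datastore data; VCD's real placement logic considers more inputs (storage profiles, disabled datastores, thresholds, and so on).

```python
# The "least USED space" placement rule, as a small sketch.
# Datastore names and numbers are hypothetical.

def pick_datastore(datastores):
    """Return the name of the datastore with the least used space,
    mirroring how VCD places new vApps."""
    return min(datastores, key=lambda d: d["used_gb"])["name"]

# Note the rule keys on USED space, not free space: gold-ds02 wins here
# even though gold-ds01 has far more free capacity.
tenant_datastores = [
    {"name": "gold-ds01", "capacity_gb": 4096, "used_gb": 1500},  # 2596 GB free
    {"name": "gold-ds02", "capacity_gb": 1024, "used_gb": 400},   # 624 GB free
]

print(pick_datastore(tenant_datastores))  # -> gold-ds02
```

That distinction matters when mixing large and small datastores in one Provider VDC: the small, mostly empty ones attract the new vApps.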

The following made a lot of sense to me. I attempted to sort it by what I think is needed versus highly recommended – "needed" meaning vCloud is dependent on that technology.

  • High availability of certain components – applications.
    • vShield Manager > FT enabled (Fault Tolerance)
    • vCenter Server > Heartbeat (More critical because VCD uses vCenter)
  • Components Needed to use in the Cloud Architecture (VMware Specific)
    • vCenter (one for management and the other for cloud)
    • vShield Manager
    • vChargeback
  • Components Highly Recommended:
    • vCloud Service Manager
    • vCenter Update Manager
    • vCloud Connector
    • vCenter Orchestrator
  • Future Products that will be vCloud ready:
    NOTE: Due to the different APIs in each product, VMware is playing catch-up on getting some of these products "vCloud Ready"

    • vCenter Ops
    • Infrastructure Viewer
  • Skill and Knowledge increase is also needed:
    • vSphere / ESX
    • Deeper Storage Skills
    • Deeper Networking & Firewall Skills
    • Scripting (PowerCLI)
    • Workflow / Automation
    • Capacity Planning

Note: Previously it was just ESX, vCenter, and some scripting;
it is also much more about infrastructure management now.

Cell Network Considerations:

  • Network Design of Interfaces:
    • HTTP/Console Proxy (Front-End end-user aka Portal)
    • OS Management
    • Database (Oracle or MS-SQL)
    • NFS (Transfer Service Storage L2 Network with Jumbo Frames)
    • vSphere (L2 Network with Jumbo Frames)

Note: This is not referenced in the architecture – these are just recommendations, and they may require static or additional routes. Traditionally this has been two interfaces; use VLANs if possible. The NFS and vSphere interfaces are mostly for the cloning or import/export processes of VCD. This would allow the cloud to be even more scalable and efficient.
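The interface list above can be sketched as data, with a quick check that the NFS and vSphere legs actually carry jumbo frames. The names, VLAN IDs, and MTUs below are assumptions for illustration, not from any reference architecture.

```python
# The cell interface layout, sketched as data with a jumbo-frame check
# on the NFS and vSphere legs. All names, VLANs, and MTUs are hypothetical.

cell_interfaces = {
    "portal":   {"vlan": 100, "mtu": 1500},  # HTTP / console proxy (front end)
    "os-mgmt":  {"vlan": 101, "mtu": 1500},
    "database": {"vlan": 102, "mtu": 1500},
    "nfs":      {"vlan": 103, "mtu": 9000},  # transfer service, L2 + jumbo
    "vsphere":  {"vlan": 104, "mtu": 9000},  # L2 + jumbo
}

def jumbo_missing(interfaces, need_jumbo=("nfs", "vsphere")):
    """Return the interfaces that should carry jumbo frames but don't."""
    return [name for name in need_jumbo if interfaces[name]["mtu"] < 9000]

print(jumbo_missing(cell_interfaces))  # -> [] (both legs are jumbo-clean)
```

A mismatched MTU on the transfer-service leg is exactly the kind of thing that silently slows down VCD cloning and import/export, so it is worth encoding the check.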

Feel Free to comment! For more information visit:
http://professionalvmware.com/2011/11/brownbag-follow-up-vcloud-architecture/

