VMware Backup CommVault – Features (Simpana 9)
As some of you may or may not know, I have been working with CommVault. Here is a high-level overview of some features. (If any of these notes are not accurate, please contact me for an update. This version is based on plain Simpana 9, which is not the latest release of the product; newer versions may be improved or different.)
CommVault Simpana 9
Architectural – Informational
1. CommCell – the CommServe, the MediaAgents, and the Clients
- CommServe servers are the same thing as CommCells (stand-alone or distributed) and are a way to separate data with a physical boundary:
- Consists of a master server and media agents
2. A CommCell can support more than 5,000 physical servers depending on configuration
3. You can define groups and roles granularly and use AD to assist this.
4. Can use multiple AD authentications and connections.
5. Is a distributed model and can involve multiple media agents.
6. Scaling involves using media agents.
7. Media agents can have certain licensed options on them.
8. Using multiple media agents (licensed the same) enables load balancing between them based on current utilization, meaning some backup operations may run on media agents that are not under load. This helps ensure backups run without being impacted by performance or load.
9. Distributed model allows for a certain level of High Availability allowing other media agents to pick up the slack if one goes down.
- Requires SQL Server
- Mounting NFS backup media requires a Linux-based media agent
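The load-balancing and failover behavior described above can be sketched roughly like this. This is an illustrative Python sketch only, not CommVault code; the agent names and fields are made up:

```python
# Hypothetical sketch: pick a media agent for a job based on current load,
# skipping agents that are down (failover), as the notes above describe.

def pick_media_agent(agents):
    """Return the least-loaded online agent, or None if all are down."""
    online = [a for a in agents if a["online"]]
    if not online:
        return None
    return min(online, key=lambda a: a["active_jobs"])

agents = [
    {"name": "MA1", "online": True,  "active_jobs": 12},
    {"name": "MA2", "online": True,  "active_jobs": 3},
    {"name": "MA3", "online": False, "active_jobs": 0},  # down -> skipped
]
print(pick_media_agent(agents)["name"])  # MA2 (online and least loaded)
```

The design point is simply that with several equally licensed media agents, the job lands on whichever mover is least busy, and a dead agent is never selected.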
Encryption – Set on the media agent properties under “Encryption”
- From Host to Media Agent Encryption
- Various encryption types
- Encryption of data in flight (only data handled by CommVault)
- Encrypt to disk > Dedupe Data to disk can be encrypted
- Encrypt after moving from disk to other backup target
- Uses keys so that customer data can be read only by that customer
- CommVault is the only solution that can do dedupe-plus-encryption of data. (verify?)
- Encryption is set in the properties of the media agent (right-click the media agent)
- Encryption with Direct Media Access can be a caveat depending on how you want to be able to access the encrypted data: password protected, no password, or using keys
- Some customers' tapes use TKLM for key management (so there is no need for CommVault-side encryption there)
- Data Domain Media targets present particular challenges
- Encryption with CommVault gives you a lot of granular control (from the looks of it)
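The note above about encrypting deduplicated data hides an ordering subtlety worth making concrete: encryption has to happen after dedupe, because independently encrypted copies of the same block no longer match. A toy Python illustration (the random-nonce prefix is only a stand-in for real per-chunk encryption, not how CommVault works internally):

```python
# Toy demo: identical chunks dedupe by signature, but encrypting each
# chunk independently (simulated with a random nonce) makes every copy
# unique and destroys dedupe savings -- hence "encrypt after dedupe".
import hashlib
import os

chunks = [b"block-A", b"block-A", b"block-B"]  # two duplicates

plain_sigs = {hashlib.sha256(c).hexdigest() for c in chunks}
print(len(plain_sigs))      # 2 unique signatures -> duplicates collapse

encrypted = [os.urandom(8) + c for c in chunks]  # stand-in for encryption
enc_sigs = {hashlib.sha256(c).hexdigest() for c in encrypted}
print(len(enc_sigs))        # 3 -> no dedupe possible on ciphertext
```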
Storage Policy – How it handles the data being written to the media.
- You can set up various policies per client to separate the data being backed up.
- Data Streams –
- Can be set higher or lower (default is 30)
- When all data streams are in use, jobs go into a pending state
- You can specify different data pools for different policies (Backup media types)
- At the application layer CommVault can open up more data pipes and targets to disk/readers for streams, which means CommVault delivers some of the best throughput when moving data.
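The stream behavior above can be sketched as a tiny scheduler. This is a hypothetical model, not CommVault's actual scheduler; the job names and stream counts are invented:

```python
# Minimal sketch: jobs consume data streams from a policy's pool
# (default 30); when the pool is exhausted, new jobs sit in "pending"
# until streams free up.

MAX_STREAMS = 30

def schedule(jobs, max_streams=MAX_STREAMS):
    running, pending, used = [], [], 0
    for name, streams_needed in jobs:
        if used + streams_needed <= max_streams:
            running.append(name)       # enough free streams: run now
            used += streams_needed
        else:
            pending.append(name)       # pool exhausted: job goes pending
    return running, pending

jobs = [("vm-backup", 20), ("file-backup", 8), ("db-backup", 6)]
running, pending = schedule(jobs)
print(running)   # ['vm-backup', 'file-backup']  (20 + 8 = 28 <= 30)
print(pending)   # ['db-backup']  (would push the total to 34)
```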
- Data archiving and retention offers a wide-array of options
- Data Deduplication
- The dedupe ratio on a "first pass full image level" backup was all over the place.
- Is an option that can be set individually on each storage policy
- Can create a new dedupe database with each policy
- Database – is only used on a check-in basis and can be set at a "client" level. This means a client can have its own dedicated dedupe database
- If the dedupe database is gone, we can still restore data based off metadata in the backup data (this is how CommVault is designed and backs up data)
- You can recover a dedupe database, and CommVault can back it up
- Dedupe to tape – a price-per-performance conversation
- Reduces the cost of tapes in general per year
- Requirements should be defined
- Writes in volume
- All tape writes are sequential
- All data has to be staged to disk in order for CommVault to read that data
- Customer requirements may be "tape only" backup
- RTO and RPO become a talking point
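The "check-in basis" dedupe database mentioned above can be sketched like this. A hedged, hypothetical illustration in Python, not the Simpana implementation:

```python
# Sketch: each block's signature is checked into the dedupe database;
# the block itself is written only when the signature is new, otherwise
# only a reference is recorded.
import hashlib

def backup(blocks, dedupe_db, store):
    """Back up blocks; return how many were actually written."""
    written = 0
    for block in blocks:
        sig = hashlib.sha256(block).hexdigest()
        if sig not in dedupe_db:      # new signature: store the block
            dedupe_db.add(sig)
            store[sig] = block
            written += 1
        # known signature: reference only, no data moved
    return written

dedupe_db, store = set(), {}
print(backup([b"os-image", b"app-data", b"os-image"], dedupe_db, store))  # 2
print(backup([b"os-image", b"new-file"], dedupe_db, store))               # 1
```

Note that `store` still holds every unique block, which mirrors the point above: even if the dedupe database is lost, the backup data itself remains restorable.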
- Hybrid design of CommVault allows you to mix and match different types of backup data. This gives you a lot of granular control of your backup policies. (Strong Plus)
- Properties of Storage Policies:
- Check Copies > Right Click Storage Policy > Select Copy Precedence
- Highlighting a storage policy in the right pane shows the copies of the data in that policy.
- Copy to disk
- Copy to tape
- Backup Jobs associated
- Dedupe rate
- Alerting for Aux Copy (i.e. basically when you move data from disk to tape)
- Storage policies control the retention of backup jobs – NOTE: This is not handled in the schedules
- Synthetic Full – compiles all the incremental backups and the full backup together, along with deduplication. This allows quick recovery, and eventually only incremental backups are needed to perform restores. It is also a very fast backup.
- Gets ACLs from the very first backup
- Takes all disk signatures and creates a synthetic full from them and the incrementals after. The synthetic full merges the full/incremental backups and avoids re-compiling, allowing for faster backups
- You are not getting the original backup, or ACLs, afterwards.
- A different conversation with Tapes
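The synthetic full merge described above can be sketched in a few lines. A hedged illustration only (files modeled as a dict of name to version), not CommVault's actual merge logic:

```python
# Sketch: a synthetic full is built on the backup side by merging the
# last full with the incrementals taken since, so the client is not
# re-read; later versions of a file win.

def synthetic_full(full, incrementals):
    merged = dict(full)            # start from the previous full backup
    for inc in incrementals:       # apply incrementals in order
        merged.update(inc)         # newer file versions overwrite older
    return merged

full = {"a.txt": "v1", "b.txt": "v1"}
incs = [{"a.txt": "v2"}, {"c.txt": "v1"}]
print(synthetic_full(full, incs))
# {'a.txt': 'v2', 'b.txt': 'v1', 'c.txt': 'v1'}
```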
- Adding another copy (retention/archival so to speak):
- To add a new copy of the storage policy, right-click the storage policy > All Tasks > New Copy
- With each copy you can define different properties for how that copy treats the data, again giving you a large amount of granular control over retention
- You can also define a new schedule as well
- If something occurs with the fabric or SAN and you have a separate copy that is not affected by that failure, you can swap the copies so the backup can still run when the primary backup media is down or unavailable.
- You will still have to configure associations (backup jobs)
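The per-copy retention idea above can be made concrete with a small sketch. Copy names and retention values here are invented for illustration:

```python
# Sketch: each storage-policy copy keeps the same jobs for a different
# retention period, e.g. a short-lived disk copy and a long-lived
# tape (aux) copy of the same backup job.

copies = {
    "primary-disk": {"retention_days": 14},
    "aux-tape":     {"retention_days": 365},
}

def expired(job_age_days, copy_name):
    """True if a job this old has aged out of the given copy."""
    return job_age_days > copies[copy_name]["retention_days"]

job_age = 30  # days since the backup job ran
print(expired(job_age, "primary-disk"))  # True  -> pruned from disk
print(expired(job_age, "aux-tape"))      # False -> still restorable from tape
```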
Backup Jobs –
- Data Readers – These determine how many VMs you can essentially back up at the same time.
- Can be adjusted up and down.
- Different backup sets allow you to separate physical boundaries and can sometimes back up or touch the same data more than once.
- Defining subclients within the backup job allows you to separate backup objects without touching the same data multiple times.
- A subclient is essentially the set of backup objects or targets.
- You can associate your Backup set with certain Backup Policies
- Backup Types
- File Level
- Network Agents determine how many processes are used to move the data
- This affects streams on the back-end (Network Agents times Data Readers = Data Streams)
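The stream math from the note above, as a one-line sketch (the example values are illustrative):

```python
# Back-end data streams = network agents x data readers, per the notes.

def data_streams(network_agents, data_readers):
    return network_agents * data_readers

print(data_streams(2, 5))  # 2 network agents x 5 data readers = 10 streams
```

This matters in practice because the product of the two settings, not either one alone, is what counts against the storage policy's stream limit.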
- Dedupe at:
- Media Agent
1. Can define times
2. Can associate backup jobs
3. Can define full backups
4. Can define incremental backups
Streaming Backups – Media Agents
- Are essentially data movers
- Media Agents can share different types of storage and can be load balanced
- Fault tolerance can be used for media agent streaming
- Intelligent load balancing
1. For individual file restores, an agent may come into play if you don't have permissions
- Impersonate User is an option
2. Use an agent that is already authenticated
3. ACL/Permissions restoration is optional
- Restoring an image VM
1. By subclient is an in-place restore
2. By the left pane is a granular restore
- Support for vCloud Director – (I don’t see this not being addressed)
1. Manual import into vCloud Director
2. A vSphere restore with automated import into VCD requires an admin to be involved
- A user MUST have a defined email address (User Properties) to use SMTP
- Must be a user on the local machine/domain.
Licensing –
- Can be à la carte (line item)
- Capacity Licensing (you get everything they offer but pay by the TB)
1. This can be good because you can factor DR costs into the per-GB/TB rate a client or the enterprise pays.
2. Can be quite expensive
- A proof of concept requires negotiating with CommVault.
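The chargeback idea behind capacity licensing can be sketched with made-up numbers. All prices here are invented for illustration, not CommVault list pricing:

```python
# Sketch: with capacity licensing you pay per TB protected, so a flat
# DR cost can be folded into a single per-TB chargeback rate.

def chargeback_per_tb(license_cost_per_tb, dr_cost_total, protected_tb):
    # spread the flat DR cost across every protected TB
    return license_cost_per_tb + dr_cost_total / protected_tb

rate = chargeback_per_tb(license_cost_per_tb=400.0,
                         dr_cost_total=50_000.0,
                         protected_tb=100)
print(rate)  # 900.0 per TB (400 license + 500 DR share)
```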
- I do believe the VCD gap will be addressed in the coming days; I cannot foresee it not being addressed.
***Disclaimer: The thoughts and views expressed on VirtualNoob.wordpress.com by Chad King in no way reflect the views or thoughts of his employer or of any other company. These are his personal opinions, which are formed on his own. Also, products improve over time and some things may be out of date. Please feel free to contact us and request an update and we will be happy to assist. Thanks!~