Build a Commvault Lab on AWS - Tutorial Part 2 of 2
Part 1 focused on deploying the Commvault instance on AWS. If you are familiar with deploying from the marketplace, you should have no issues following Part 2. Just in case you missed Part 1, I have left you guys the link below.
Build a Commvault Lab on AWS - Tutorial Part 1 of 2
In Part 2, now that you have seen how simple it is to deploy a Commvault environment, let's do some basic configuration to get you started protecting your first instance. We will also set up some cloud storage, like Amazon S3, to land your copies on as well.
Let's get cracking!
Step 1. Assuming you have stopped the Commserve instance, it's a good time to power it up now. If you RDP in, you will see a PowerShell window loading the required modules. It does take a couple of minutes, so go grab yourself a cup of coffee, or spin up a few Windows EC2 instances now (as dummy clients) to test backups against later.
Step 2. Click on Start > Commvault > Commvault Command Center. This will bring up the Guided Setup window. The first thing you will need to do is create a new account. Enter an email address and password to move forward.
Step 3. Log in to the console with the username and password you configured earlier.
Step 4. The next step will guide you through the mandatory initial setup. Essentially, Commvault will prompt you to configure a storage pool and a server backup plan. A storage pool is a storage location where your backups will land and be stored, while a server backup plan is, as the name suggests, a definition of how your servers will be protected (full/incremental schedule, frequency, RPO, etc.).
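A quick way to reason about the frequency/RPO relationship in a plan: the worst-case RPO is simply the interval between backups, since a failure just before the next run loses up to one interval of data. Here is a small sketch (the numbers are illustrative, not Commvault defaults):

```python
from datetime import datetime, timedelta

def next_backup_times(start, interval_hours, count):
    """Return the next `count` backup start times, given a fixed interval.
    The worst-case RPO equals `interval_hours` of data loss."""
    return [start + timedelta(hours=interval_hours * i) for i in range(1, count + 1)]

# Incrementals every 4 hours -> worst-case RPO of 4 hours.
start = datetime(2021, 6, 1, 0, 0)
for run in next_backup_times(start, 4, 3):
    print(run.strftime("%Y-%m-%d %H:%M"))
```

Shrinking the interval tightens the RPO at the cost of more frequent jobs, which is exactly the trade-off the plan's RPO setting expresses.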
You can skip this step, but it probably makes sense to follow the guided steps to create your first plan and storage pool. Click “Let’s get started”.
Step 5. As you can see, there are a few storage options you can pick to be part of the pool. Commvault is very flexible about where you can send your backups: cloud storage, disk targets, HyperScale or Metallic.
Just in case you are not aware, HyperScale is Commvault’s HCI backup appliance and Metallic is Commvault’s SaaS-based cloud storage offering.
Anyway, for today’s configuration I will focus on disk storage. If you followed along from Part 1, I created a disk volume in Step 9 for us to store our backups on. I have it attached to this Commserve server and have formatted it as D:\ (yes, I skipped the step on formatting drives in Windows, which I think you guys already know how to do :))
I will configure it as follows:
Name : Primary Target Storage
MediaAgent : <name of the Commserve>
Type : Local Disk
Backup Location : D:
Use Deduplication : Off (for this example, I will disable deduplication. If you choose to enable it, as per the screenshot below, you will need to pick another location where the deduplication database (DDB) will be stored. It is often recommended that the DDB sit on a fast drive.)
Hit “Save”
Step 6. The next screen will create a protection plan. I have changed the plan name to “Production Server Plan”, but for simplicity I have left everything else as default.
Step 7. You should now land on the main dashboard, all ready to configure some jobs!
Step 8. Click on “Virtualization”, and we will now connect to a hypervisor (in this case, Amazon). Hopefully, you have your access key / secret key ready as well. If you are not sure where to get these, check out this link.
Select Vendor : Amazon
Client Name : Amazon
Regions : <blank>
Authentication : Access and secret key
Access & Secret Keys : <xxxxx>
Access Nodes : Commserve
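If the hypervisor connection fails, a common culprit is a mistyped key. IAM access key IDs have a predictable shape (20 uppercase alphanumeric characters, starting with "AKIA" for long-term keys or "ASIA" for temporary ones), so a hypothetical sanity-check helper like this can catch copy/paste errors before you blame the network:

```python
import re

def looks_like_access_key_id(key_id: str) -> bool:
    """Shallow format check for an IAM access key ID: 20 uppercase
    alphanumeric characters starting with AKIA (long-term) or
    ASIA (temporary). It does NOT verify the key is valid in AWS."""
    return re.fullmatch(r"(AKIA|ASIA)[A-Z0-9]{16}", key_id) is not None

# AWS's documented example key ID passes the format check:
print(looks_like_access_key_id("AKIAIOSFODNN7EXAMPLE"))  # True
```

A passing format check only rules out truncation or stray whitespace; the real test is whether Commvault can enumerate your instances in the next step.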
Step 9. Next, we will create the group of VMs we want the plan to be associated with. Give it an easy name, like “Production VM”, then filter the VMs by region, instance type, tags or zone. In this case, I have filtered by region and expanded the Singapore region as an example.
As you can see, I have 3 VMs that I can protect, so I have conveniently selected all three.
As we only have one plan configured, it defaults to the Production Server Plan. Leave that as is and click “Finish” when ready.
Step 10. Upon completion, you will be redirected to the Active jobs tab, where you will see the first backup run.
Step 11 (optional). You are now done creating your first backup job. That wasn’t complex at all, right?
As discussed earlier, I will now show you how to create and add an additional S3 target to your storage pool. Given we are on Amazon, it makes sense to leverage S3 on AWS.
Search for “S3” in the AWS console search bar and navigate to the S3 service page. When you get there, click the orange “Create bucket” button in the top right corner.
Step 12. You will need to give your bucket a unique name, and when I say unique, it's unique across AWS globally. I’ve gone with “charlescvbucket”. Region-wise, as always it's up to you, but for me it's Singapore (ap-southeast-1). For everything else, I have gone with the defaults for now.
Click “Create bucket”.
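Bucket names have strict rules (3-63 characters, lowercase letters, digits, hyphens and dots only, must start and end with a letter or digit, and must not look like an IP address), and the console will reject anything else. A simplified check covering the main rules:

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Simplified check of the core S3 bucket naming rules:
    3-63 chars; lowercase letters, digits, hyphens, dots;
    starts and ends with a letter or digit; not IP-address-shaped.
    (The full AWS rules have a few more edge cases.)"""
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]", name):
        return False
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):  # e.g. 192.168.0.1
        return False
    return True

print(is_valid_bucket_name("charlescvbucket"))    # True
print(is_valid_bucket_name("Charles_CV_Bucket"))  # False: uppercase and underscores
```

Note that a name can be valid yet still unavailable, since the namespace is global; if someone else owns it, pick another.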
Step 13. Head back to the Commvault Command Center, and on the Navigation pane on the left, select Storage > Cloud. Click on “Add” on the top right corner.
Step 14. You will be presented with two options: Metallic, which as explained earlier is a Storage-as-a-Service solution by Commvault that is pretty unique (maybe I’ll cover it in a future post), and other cloud storage. Pick Cloud Storage.
Step 15. Fill in the required fields. There might be quite a few, but don’t worry, it's fairly straightforward.
Name : Amazon S3
Type : Amazon S3
MediaAgent : Commserve
Service Host : s3.[region].amazonaws.com
Authentication : Access and secret keys
Credentials : You can create your own credential profile here
Bucket : charlescvbucket
Storage class : Standard
Use Deduplication : Disabled (if you enable it, you will need to provide a DDB location like before)
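The Service Host field follows a predictable pattern for the standard AWS partition, so you can derive it from your region code rather than hunting for it:

```python
def s3_service_host(region: str) -> str:
    """Build the regional S3 endpoint for the standard AWS partition.
    Other partitions (China, GovCloud) use different domain suffixes."""
    return f"s3.{region}.amazonaws.com"

# Singapore, where my bucket lives:
print(s3_service_host("ap-southeast-1"))  # s3.ap-southeast-1.amazonaws.com
```

Matching the endpoint region to the bucket's region keeps backup traffic from taking a cross-region detour.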
Step 16. Now let's extend the earlier Production Server Plan to stage long-term full copies to AWS S3. Go to the left navigation panel: Manage > Plans > Production Server Plan.
Step 17. Look for the “Backup Destinations” card, and you should see how this plan is currently configured. Click Add > Copy.
Step 18. A new card will pop up and prompt for details of the new target copy. Give it a useful name, and you should see both Primary and the S3 storage configured earlier in the Storage pulldown. As for backups to copy, you have the option to copy full backups, weekly fulls, daily fulls, etc. from the pulldown if you prefer. Similarly, you can change the retention accordingly.
For this tutorial, I have configured it as per below.
Name : Long term copy
Storage : Amazon S3
Source : Primary
Backups to copy : Monthly Fulls
Retention Rules : 1 month
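To get an intuition for what the one-month retention means for the copy, here is a rough sketch of when a monthly full becomes eligible for pruning. This is a deliberate simplification: Commvault's real data aging also considers cycles and dependent jobs.

```python
from datetime import date, timedelta

def prune_eligible_date(backup_date: date, retention_days: int = 30) -> date:
    """Rough approximation: a job copy becomes eligible for pruning once
    its retention period elapses. Real Commvault aging also factors in
    backup cycles and dependent incrementals."""
    return backup_date + timedelta(days=retention_days)

june_full = date(2021, 6, 1)
print(prune_eligible_date(june_full))  # 2021-07-01
```

In practice this means the S3 copy holds roughly one monthly full at a time, which keeps the bucket's storage cost predictable.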
That concludes Part 2 of this tutorial. In a short time, you now have a working backup and data protection solution, staging off to an S3 target as well. You can use this Commvault instance not only to back up your AWS instances; the same Commserve can connect to Azure or on-premises VMware environments and have them backed up either to AWS or locally. Of course, you will need to set up some connectivity between them.
Hopefully, this has been useful to get you guys started. Till next time!