Re: Snapshot automation/scheduling for rbd?

Right. IIUC, disk snapshots are disabled in the global settings, and I believe the docs also warn that they cannot produce crash-consistent snapshots. I believe the snapshots can still be taken, but I'm not sure whether a pause or an fs freeze is involved.

AFAIK, you'll have to initiate a snapshot for each volume, regardless of whether it is the root or a data volume.
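
For example, here is a rough sketch of how per-volume snapshots could be scripted against the CloudStack API using the third-party "cs" Python client; the endpoint, keys, and VM id are placeholders, and I'm assuming the usual listVolumes/createSnapshot calls and response fields:

# Sketch: snapshot every volume (root and data) attached to one VM,
# letting CloudStack manage the snapshots. Requires `pip install cs`.
from cs import CloudStack

api = CloudStack(
    endpoint="https://cloudstack.example.com/client/api",  # placeholder
    key="API_KEY",                                         # placeholder
    secret="SECRET_KEY",                                    # placeholder
)

vm_id = "REPLACE-WITH-VM-UUID"  # placeholder VM id

# listVolumes returns both the ROOT and any DATADISK volumes for the VM
volumes = api.listVolumes(virtualmachineid=vm_id).get("volume", [])

for vol in volumes:
    print(f"Snapshotting {vol['name']} ({vol['type']})")
    # createSnapshot is asynchronous; the client returns a job reference
    api.createSnapshot(volumeid=vol["id"])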

I've not tried a full VM snapshot, which might be helpful in this scenario, but I think it doesn't work either due to the limitations imposed. To track a block device, there is a path parameter in every volume's details in CloudStack, similar to "e93016ae-e572-4b15-8df7-a08c787568d4". This should point directly to <ceph_pool>/<e93016ae-e572-4b15-8df7-a08c787568d4>. If you have access to the Ceph dashboard, you should be able to view those snapshots under the block device more easily, or you can also use the rbd CLI.
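
To illustrate the mapping, here's a rough sketch using the python-rados/python-rbd bindings to open the image whose name matches the CloudStack volume path and list its snapshots; the pool name "cloudstack" and the ceph.conf location are assumptions for your setup:

# Sketch: given the CloudStack volume "path" (the RBD image name),
# list its snapshots directly in Ceph. Needs python3-rados and python3-rbd.
import rados
import rbd

POOL = "cloudstack"  # assumption: the RBD pool used as primary storage
IMAGE = "e93016ae-e572-4b15-8df7-a08c787568d4"  # the volume's path from CloudStack

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # assumed conf path
cluster.connect()
try:
    ioctx = cluster.open_ioctx(POOL)
    try:
        with rbd.Image(ioctx, IMAGE, read_only=True) as image:
            for snap in image.list_snaps():
                print(f"{snap['name']}  {snap['size']} bytes")
    finally:
        ioctx.close()
finally:
    cluster.shutdown()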

As a general note, bypassing the cloud orchestrator and taking snapshots manually with the Ceph client tools will likely disrupt the orchestrator's workflow. For example, if you take snapshots that are unmanaged by CloudStack and then try to perform CRUD operations on that storage block device through the web UI or API, you'll see unexpected errors.

Thanks

________________________________
From: Jeremy Hansen <jeremy@xxxxxxxxxx>
Sent: Monday, February 5, 2024 10:29:06 pm
To: ceph-users@xxxxxxx <ceph-users@xxxxxxx>; Jayanth Reddy <jayanthreddy5666@xxxxxxxxx>
Subject: Re:  Re: Snapshot automation/scheduling for rbd?

Thanks. I think the only issue with doing snapshots via CloudStack is potentially having to pause an instance for an extended period of time. I haven’t tested this yet, but based on the docs, I think KVM has to be paused regardless.

What about added volumes?  Does an instance have to pause if you’re only snapshotting added volumes and not the root disk?

A couple of questions: if I snapshot an RBD image from the Ceph side, does that require an instance pause, and is there a graceful way, perhaps through the API, to get the full mapping of instance volumes -> Ceph block image names? That way I can tell which block images belong to which CloudStack instance. I’ve never understood how to properly trace a volume from an instance to its Ceph image.

Thanks!



On Saturday, Feb 03, 2024 at 10:47 AM, Jayanth Reddy <jayanthreddy5666@xxxxxxxxx<mailto:jayanthreddy5666@xxxxxxxxx>> wrote:
Hi,
For CloudStack with RBD, you should be able to control snapshot placement using the global setting "snapshot.backup.to.secondary". Setting this to false places snapshots directly on Ceph instead of secondary storage. See if you can perform recurring snapshots that way. I know there are limitations with KVM and disk snapshots, but it's good to give it a try.
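
For reference, a minimal sketch of checking and flipping that setting through the API, again using the third-party "cs" Python client with placeholder endpoint and keys. Note that updateConfiguration changes a global setting, so apply it deliberately:

# Sketch: inspect and update the "snapshot.backup.to.secondary" global
# setting via the CloudStack API. Requires `pip install cs`.
from cs import CloudStack

api = CloudStack(
    endpoint="https://cloudstack.example.com/client/api",  # placeholder
    key="API_KEY",                                         # placeholder
    secret="SECRET_KEY",                                    # placeholder
)

name = "snapshot.backup.to.secondary"

# Show the current value before changing anything
print(api.listConfigurations(name=name))

# Keep snapshots on Ceph (primary storage) instead of copying to secondary
api.updateConfiguration(name=name, value="false")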

Thanks


________________________________
From: Jeremy Hansen <jeremy@xxxxxxxxxx>
Sent: Saturday, February 3, 2024 11:39:19 PM
To: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject:  Re: Snapshot automation/scheduling for rbd?

Am I just off base here or missing something obvious?

Thanks



On Thursday, Feb 01, 2024 at 2:13 AM, Jeremy Hansen <jeremy@xxxxxxxxxx<mailto:jeremy@xxxxxxxxxx>> wrote:
Can RBD image snapshotting be scheduled like CephFS snapshots?  Maybe I missed it in the documentation, but it looked like scheduling snapshots wasn’t a feature for block images.  I’m still running Pacific. We’re trying to devise a sufficient backup plan for CloudStack and other things residing in Ceph.

Thanks.
-jeremy




_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



