Thanks. I think the only issue with doing snapshots via CloudStack is potentially having to pause an instance for an extended period of time. I haven't tested this yet, but based on the docs, I think KVM instances have to be paused regardless.
What about added volumes? Does an instance have to pause if you're only snapshotting added volumes and not the root disk?
A couple of questions. If I snapshot an RBD image from the Ceph side, does that require an instance pause? And is there a graceful way, perhaps through the API, to do the full mapping of instance volumes -> Ceph block image names, so I can understand which block images belong to which CloudStack instance? I've never understood how to properly trace a volume from instance to Ceph image.
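For what it's worth, the kind of mapping I'm after would be something like the untested sketch below. It assumes the community "cs" Python CloudStack client and the python3-rados/python3-rbd bindings, and it assumes a volume's "path" field corresponds to the RBD image name in the primary storage pool (pool name, endpoint, and keys are placeholders):

#!/usr/bin/env python3
# Untested sketch: map CloudStack instance volumes to RBD image names.
# Assumes the 'cs' CloudStack API client (pip install cs), the python3-rados /
# python3-rbd bindings, and that a volume's 'path' field is the RBD image name
# in the primary-storage pool (named 'cloudstack' here -- adjust for your setup).
import rados
import rbd
from cs import CloudStack

api = CloudStack(endpoint='https://cloudstack.example.com/client/api',
                 key='APIKEY', secret='SECRETKEY')

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('cloudstack')          # RBD primary storage pool
ceph_images = set(rbd.RBD().list(ioctx))

for vm in api.listVirtualMachines().get('virtualmachine', []):
    for vol in api.listVolumes(virtualmachineid=vm['id']).get('volume', []):
        image = vol.get('path')                   # usually the RBD image name
        status = 'found in Ceph' if image in ceph_images else 'NOT in pool'
        print(f"{vm['name']}: volume {vol['name']} -> rbd image {image} ({status})")

ioctx.close()
cluster.shutdown()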
Thanks!
On Saturday, Feb 03, 2024 at 10:47 AM, Jayanth Reddy <jayanthreddy5666@xxxxxxxxx> wrote:

Hi,

For CloudStack with RBD, you should be able to control snapshot placement using the global setting "snapshot.backup.to.secondary". Setting this to false causes snapshots to be placed directly on Ceph instead of secondary storage. See if you can perform recurring snapshots. I know there are limitations with KVM and disk snapshots, but it's worth a try.
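As a rough illustration only (using the community "cs" Python client; the endpoint and keys are placeholders, and changing the setting needs admin credentials), checking and flipping that global setting through the API could look like:

# Rough illustration: check and disable snapshot.backup.to.secondary so that
# snapshots stay on Ceph (RBD) rather than being copied to secondary storage.
# Endpoint and keys are placeholders; requires admin credentials.
from cs import CloudStack

api = CloudStack(endpoint='https://cloudstack.example.com/client/api',
                 key='APIKEY', secret='SECRETKEY')

current = api.listConfigurations(name='snapshot.backup.to.secondary')
print(current.get('configuration', []))

api.updateConfiguration(name='snapshot.backup.to.secondary', value='false')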
Thanks
From: Jeremy Hansen <jeremy@xxxxxxxxxx>
Sent: Saturday, February 3, 2024 11:39:19 PM
To: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: [ceph-users] Re: Snapshot automation/scheduling for rbd?

Am I just off base here or missing something obvious?
Thanks
On Thursday, Feb 01, 2024 at 2:13 AM, Jeremy Hansen <jeremy@xxxxxxxxxx> wrote:
Can RBD image snapshotting be scheduled like CephFS snapshots? Maybe I missed it in the documentation, but it looked like scheduled snapshots aren't a feature for block images. I'm still running Pacific. We're trying to devise a sufficient backup plan for CloudStack and other things residing in Ceph.
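If scheduling really isn't there for plain RBD images on Pacific, the workaround I'm considering is a simple cron job along the lines of the untested sketch below (python rbd bindings; the pool name, snapshot prefix, and retention count are placeholders). My understanding is that such snapshots would be crash-consistent unless the guest filesystems are frozen first:

#!/usr/bin/env python3
# Untested sketch of a cron-driven RBD snapshot job: snapshot every image in a
# pool and prune old snapshots by a simple name-based retention. Pool name,
# snapshot prefix, and retention count are placeholders for illustration.
import datetime
import rados
import rbd

POOL = 'cloudstack'
PREFIX = 'auto-'
KEEP = 7  # keep the 7 most recent auto- snapshots per image

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx(POOL)

stamp = datetime.datetime.utcnow().strftime('%Y%m%d-%H%M%S')
for name in rbd.RBD().list(ioctx):
    with rbd.Image(ioctx, name) as img:
        img.create_snap(PREFIX + stamp)
        # Prune: sort our auto- snapshots by name (timestamped) and drop the oldest.
        snaps = sorted(s['name'] for s in img.list_snaps()
                       if s['name'].startswith(PREFIX))
        for old in snaps[:-KEEP]:
            img.remove_snap(old)

ioctx.close()
cluster.shutdown()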
Thanks.
-jeremy
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx