Hi Alexandre,
Thanks for the link! Unless I'm misunderstanding, this is for replicating an RBD volume from one cluster to another. What if I just want to back up a running cluster without having another cluster to replicate to? Ideally, I'd like a tarball of raw files that I could extract on a new host, start the Ceph daemons, and be up and running.
Chris Armstrong
Head of Services
OpDemand / Deis.io
GitHub: https://github.com/deis/deis -- Docs: http://docs.deis.io/
On Wed, Nov 5, 2014 at 1:04 AM, Alexandre DERUMIER <aderumier@xxxxxxxxx> wrote:
>>Is RBD snapshotting what I'm looking for? Is this even possible?
Yes, you can use RBD snapshotting with export/import:
http://ceph.com/dev-notes/incremental-snapshots-with-rbd/
But you need to do it for each RBD volume.
Here is a script to do it:
http://www.rapide.nl/blog/item/ceph_-_rbd_replication
(AFAIK it's not possible to do it at the pool level.)
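For example, a rough per-image loop might look something like this (just a sketch; the pool name, backup directory, and date-based snapshot names are placeholders you'd adapt to your setup):

#!/bin/bash
# Sketch: snapshot every RBD image in a pool and export it,
# incrementally when a previous day's snapshot already exists.
POOL=rbd
BACKUPDIR=/backup/rbd
TODAY=$(date +%Y%m%d)
YESTERDAY=$(date -d yesterday +%Y%m%d)

for IMG in $(rbd ls "$POOL"); do
    # take today's snapshot of the image
    rbd snap create "$POOL/$IMG@$TODAY"
    if rbd snap ls "$POOL/$IMG" | grep -q "$YESTERDAY"; then
        # incremental: export only the changes since yesterday's snapshot
        rbd export-diff --from-snap "$YESTERDAY" \
            "$POOL/$IMG@$TODAY" "$BACKUPDIR/$IMG-$TODAY.diff"
    else
        # first run: full export of today's snapshot
        rbd export "$POOL/$IMG@$TODAY" "$BACKUPDIR/$IMG-$TODAY.full"
    fi
done

To restore on a new cluster you'd go the other way: rbd import the full export to recreate the image, then apply each incremental in order with rbd import-diff.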
----- Original Message -----
From: "Christopher Armstrong" <chris@xxxxxxxxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Sent: Wednesday, November 5, 2014 08:52:31
Subject: Full backup/restore of Ceph cluster?
Hi folks,
I was wondering if anyone has a solution for performing a complete backup and restore of a Ceph cluster. A Google search turned up some articles and blog posts, some of them old, and I still don't have a good sense of whether this is feasible.
Here's what I've found:
http://ceph.com/community/blog/tag/backup/
http://ceph.com/docs/giant/rbd/rbd-snapshot/
http://t3491.file-systems-ceph-user.file-systemstalk.us/backups-t3491.html
Is RBD snapshotting what I'm looking for? Is this even possible? Any info is much appreciated!
Thanks,
Chris
Chris Armstrong
Head of Services
OpDemand / Deis.io
GitHub: https://github.com/deis/deis -- Docs: http://docs.deis.io/
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com