CephFS, being a POSIX filesystem, can be backed up with traditional tools. Given its potential size, though, that can get difficult. Given CephFS's scalability, the work can be divided up: assign multiple clients, and have each client back up a portion of the directory tree.

A lot also depends on which tools and what kind of backups you're planning. Massive on-site tape library? rsync to a remote cluster? The high level looks the same, but the details change a lot.

CephFS's metadata rollup has the potential to make delta backups extremely cheap, but I don't know of any tools that take advantage of it. As a thought exercise, it seems relatively easy to hack up rsync to handle it efficiently.

On Tue, Sep 23, 2014 at 3:51 AM, Andrei Mikhailovsky <andrei at arhont.com> wrote:
> Luis,
>
> You may want to take a look at the rbd export/import and export-diff /
> import-diff functionality. This could be used to copy data to another
> cluster or offsite.
>
> S3 has regions, which you could use for async replication.
>
> Not sure how CephFS works for backups.
>
> Andrei
> ------------------------------
>
> *From: *"Luis Periquito" <periquito at gmail.com>
> *To: *ceph-users at lists.ceph.com
> *Sent: *Tuesday, 23 September, 2014 11:28:39 AM
> *Subject: *ceph backups
>
> Hi fellow cephers,
>
> I'm being asked questions about our backup of Ceph, mainly due to data
> deletion.
>
> We are currently using Ceph to store RBD, S3 and eventually CephFS, and
> we would like to devise a plan to back up the information so as to avoid
> issues with data being deleted from the cluster.
>
> I know RBD has snapshots, but how can they be automated? Can we rely on
> them to perform data recovery?
>
> And for S3/CephFS? Are there any backup methods, other than copying all
> the information to another location?
>
> Thanks,
> Luis
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
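The divide-the-tree approach from the reply above can be sketched in a few lines of shell. This is only an illustration, not a tested tool: `/mnt/cephfs` and `/backup` are placeholder paths, and in a real deployment each rsync job would likely run on a different client machine rather than as background jobs on one host.

```shell
#!/bin/sh
# Sketch: back up a large CephFS tree by fanning out one rsync job per
# top-level directory. SRC and DST are placeholder paths; override via
# environment. In practice each subtree job could run on its own client.
SRC="${SRC:-/mnt/cephfs}"
DST="${DST:-/backup}"

backup_tree() {
    for d in "$SRC"/*/; do
        [ -d "$d" ] || continue                 # skip if the glob matched nothing
        name=$(basename "$d")
        rsync -a --delete "$d" "$DST/$name/" &  # one parallel job per subtree
    done
    wait    # block until every rsync job has finished
}

[ -d "$SRC" ] && backup_tree || echo "source $SRC not mounted; nothing to do"
```

The split here is by top-level directory, which only balances well if the subtrees are of comparable size; a real tool would want to shard on something finer.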
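Andrei's rbd export-diff / import-diff suggestion, combined with Luis's question about automating snapshots, could look roughly like the script below run daily from cron. This is a hedged sketch, not a production tool: the pool name, image name, backup path, and the backup cluster's ceph.conf path are all placeholders, `date -d yesterday` assumes GNU date, and by default the script only prints the rbd commands (set RBD=rbd to execute them for real).

```shell
#!/bin/sh
# Sketch of an incremental RBD backup using rbd export-diff / import-diff.
# POOL, IMAGE, and the backup cluster's config path are placeholders.
# Defaults to a dry run that prints the commands; set RBD=rbd to execute.
set -e
RBD="${RBD:-echo rbd}"
POOL=rbd
IMAGE=vm-disk-1
TODAY=$(date +%Y%m%d)
YESTERDAY=$(date -d yesterday +%Y%m%d)   # GNU date syntax

# 1. Snapshot today's state of the image.
$RBD snap create "$POOL/$IMAGE@backup-$TODAY"

# 2. Export only the blocks that changed since yesterday's snapshot.
$RBD export-diff --from-snap "backup-$YESTERDAY" \
    "$POOL/$IMAGE@backup-$TODAY" "/backup/$IMAGE-$TODAY.diff"

# 3. Replay the delta onto the copy held in the backup cluster.
$RBD -c /etc/ceph/backup.conf import-diff \
    "/backup/$IMAGE-$TODAY.diff" "$POOL/$IMAGE"
```

Snapshot pruning is deliberately left out; how many dailies to retain before `rbd snap rm` depends on the recovery window being asked for.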