> I'm not saying CephFS snapshots are 100% stable, but for certain
> use-cases they can be.
>
> Try to avoid:
>
> - Multiple CephFS in the same cluster
> - Snapshotting the root (/)
> - Having a lot of snapshots

How many is a lot? A lot of snapshots in total, or a lot of snapshots on
one directory? I was thinking of keeping 7 snapshots on each of 1500
directories.

> Then you could use the CephFS recursive statistics to figure out which
> directories have changed and sync their data to another cluster.
>
> There are some caveats, but it can work though!
>
> Wido
>
>> To be more precise, I'd like to be able to replicate data in a
>> scheduled, atomic way to another cluster, so that if the site hosting
>> our primary Bitbucket cluster becomes unavailable for some reason, I'm
>> able to spin up another Bitbucket cluster elsewhere.
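To make the rstats idea a bit more concrete, below is a minimal sketch of what such a sync pass could look like. It reads the ceph.dir.rctime virtual xattr (the recursive change time CephFS maintains per directory), snapshots only the directories that changed since the last run by creating a directory under .snap, and rsyncs the snapshot to the other site. The mount point, remote target, state file and snapshot naming are my own assumptions for illustration, not something from this thread.

    #!/usr/bin/env python3
    """Sketch: use CephFS recursive statistics to find changed directories
    and sync a snapshot of each one to another cluster."""

    import os
    import subprocess
    import time

    CEPHFS_ROOT = "/mnt/cephfs/repositories"          # assumed CephFS mount
    REMOTE = "backup-site:/mnt/cephfs/repositories"   # assumed rsync target
    STATE_FILE = "/var/lib/cephfs-sync/last-run"      # assumed state location


    def rctime(path):
        """Return the recursive change time (seconds) of a CephFS directory.

        ceph.dir.rctime is the newest ctime of anything below the directory;
        the raw value is "<seconds>.<nanoseconds>", so keep the integer part.
        """
        raw = os.getxattr(path, "ceph.dir.rctime").decode()
        return int(raw.split(".")[0])


    def last_sync_time():
        try:
            with open(STATE_FILE) as f:
                return int(f.read().strip())
        except FileNotFoundError:
            return 0


    def main():
        since = last_sync_time()
        now = int(time.time())

        for entry in os.scandir(CEPHFS_ROOT):
            if not entry.is_dir():
                continue
            if rctime(entry.path) <= since:
                continue  # nothing under this directory changed since last run

            # Take a snapshot so we copy a consistent view: making a
            # subdirectory under .snap is how CephFS snapshots are created.
            snap = os.path.join(entry.path, ".snap", f"sync-{now}")
            os.mkdir(snap)

            # Ship the snapshot contents to the other cluster.
            subprocess.run(
                ["rsync", "-a", "--delete",
                 snap + "/", f"{REMOTE}/{entry.name}/"],
                check=True,
            )

            # Removing the .snap subdirectory deletes the snapshot again.
            os.rmdir(snap)

        with open(STATE_FILE, "w") as f:
            f.write(str(now))


    if __name__ == "__main__":
        main()

This only walks the top-level directories and trusts rctime to roll up changes from below, which is what keeps the scan cheap even with ~1500 directories; whether you keep the per-directory snapshots around (e.g. the 7 rotating ones mentioned above) or delete them after the transfer is a separate choice.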