Re: Replicating CephFS between clusters

On 2/19/19 6:28 PM, Marc Roos wrote:
> 
>  >> 
>  >
>  >I'm not saying CephFS snapshots are 100% stable, but for certain
>  >use-cases they can be.
>  >
>  >Try to avoid:
>  >
>  >- Multiple CephFS in same cluster
>  >- Snapshot the root (/)
>  >- Having a lot of snapshots
> 
> How many is a lot? Having a lot of snapshots in total? Or having a lot 
> of snapshots on one dir? I was thinking of applying 7 snapshots on 1500 
> directories.
> 

Ah, yes, good question. I don't know if there is a true upper limit, but
leaving old snapshots around can hurt you when replaying journals and such.

Therefore, if you create a snapshot, rsync from it and then remove it
right away, you should be fine.
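
Something like this cycle per directory is what I mean. A rough sketch in
Python (the mount point /mnt/cephfs and the rsync target
backup:/srv/cephfs-copy are made-up names for illustration; the .snap
directory is how CephFS snapshots are created and removed):

import os
import subprocess

def replicate(src_dir, dest, snap_name="replication"):
    # Creating a directory under the hidden .snap dir takes the snapshot.
    snap_path = os.path.join(src_dir, ".snap", snap_name)
    os.mkdir(snap_path)
    try:
        # rsync from the snapshot path so you copy a consistent
        # point-in-time view of the directory.
        subprocess.run(
            ["rsync", "-a", "--delete", snap_path + "/", dest],
            check=True,
        )
    finally:
        # Removing the .snap entry deletes the snapshot again, so no old
        # snapshots pile up between runs.
        os.rmdir(snap_path)

replicate("/mnt/cephfs/repos/repo1", "backup:/srv/cephfs-copy/repo1/")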

You were thinking about 1500*7 snapshots?
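
For figuring out which of those 1500 directories actually changed since
the last run, you can read the recursive ctime that CephFS exposes as the
virtual xattr ceph.dir.rctime. Another rough sketch; the function name and
timestamp handling are just illustration:

import os

def changed_since(directory, last_run_epoch):
    # ceph.dir.rctime is the most recent change time of anything below
    # this directory, so there is no need to walk the whole tree.
    raw = os.getxattr(directory, "ceph.dir.rctime").decode()
    # The part before the dot is seconds since the epoch, which is
    # precise enough to compare against the previous run.
    return int(raw.split(".")[0]) > last_run_epoch

Only the directories where this returns True need the snapshot/rsync
cycle above.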

Wido

>  >Then you could use the cephfs recursive statistics to figure out which
>  >directories have changed and sync their data to another cluster.
>  >
 >  >There are some caveats, but it can work!
>  >
>  >Wido
>  >
>  >>  
>  >> 
 >  >> To be more precise, I'd like to be able to replicate data in a
 >  >> scheduled, atomic way to another cluster, so if the site hosting our
 >  >> primary bitbucket cluster becomes unavailable for some reason, I'm
 >  >> able to spin up another bitbucket cluster elsewhere.
>  >> 
>  >>  
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



