Re: CephFS active-active

Hi Isaiah,

A simple solution for multi-site redundancy is to have two nearby sites with < 3 ms latency and set up the CRUSH map [0] for datacenter-level redundancy instead of the default host-level failure domain.
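
For illustration, here is a minimal sketch of the CRUSH changes involved. The bucket names (dc1, dc2), host names, and pool names are placeholders; adjust them to your own layout:

    # Create datacenter buckets and place them under the default root
    ceph osd crush add-bucket dc1 datacenter
    ceph osd crush add-bucket dc2 datacenter
    ceph osd crush move dc1 root=default
    ceph osd crush move dc2 root=default

    # Move each host under its datacenter bucket
    ceph osd crush move host-a datacenter=dc1
    ceph osd crush move host-b datacenter=dc2

    # Replicated rule that uses the datacenter as the failure domain
    ceph osd crush rule create-replicated rep_dc default datacenter

    # Apply the rule to the CephFS pools
    ceph osd pool set cephfs_data crush_rule rep_dc
    ceph osd pool set cephfs_metadata crush_rule rep_dc

With only two datacenters you would typically also set size=4 and min_size=2 on those pools, so each site keeps two copies and the pools stay writable when one site is down.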

Performance was adequate in my testing, even for a large number of small files, as long as the latency between all nodes was kept below 3 ms. Of course, it also depends on your application.
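
As a quick sanity check before committing to such a layout, you can measure round-trip time between the nodes with plain ping (hostname is a placeholder):

    # Round-trip latency between two prospective Ceph nodes; aim for < 3 ms
    ping -c 10 node-b.example.com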

CephFS snapshot mirroring is asynchronous, so your application would need to handle the logic of switching to the replica cluster, operating in a degraded state with some data missing, synchronizing changes back to the primary after it comes online, and switching back. Too complicated, IMHO.
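
For reference, enabling the mirroring itself is only a handful of commands; the hard part is the failover/failback orchestration around it. A rough sketch, assuming a filesystem named cephfs and placeholder client/site names:

    # On both clusters: enable the mirroring manager module
    ceph mgr module enable mirroring

    # On the primary: enable mirroring for the filesystem and add directories
    ceph fs snapshot mirror enable cephfs
    ceph fs snapshot mirror add cephfs /some/dir

    # On the secondary: create a bootstrap token for the primary to import
    ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote site-b

    # Back on the primary: import the token printed by the previous command
    ceph fs snapshot mirror peer_bootstrap import cephfs <token>

You also need a cephfs-mirror daemon running on the primary cluster (e.g. deployed with "ceph orch apply cephfs-mirror"). None of this gives you automatic failover; switching clients to the replica and back is still entirely on you.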

[0]: https://docs.ceph.com/en/quincy/rados/operations/crush-map/

Kind regards,
Pavin Joseph.

On 28-Dec-22 11:27 AM, Isaiah Tang Yue Shun wrote:
Hi all,

From the documentation, I can only find the Ceph Object Gateway multi-site implementation. If we are using CephFS, how can we achieve an active-active setup for production?

Any input is appreciated.

Thanks.

Regards,
Isaiah Tang
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx