Re: CephFS multi active MDS high availability


 



See https://docs.ceph.com/en/pacific/cephfs/multimds/

If I understand it correctly, you would do something like this:

ceph fs set <fs_name> max_mds 2
ceph fs set <fs_name> standby_count_wanted 1
ceph orch apply mds <fs_name> 3
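
For example, with a filesystem called "cephfs" (just a placeholder name,
substitute your own) that would be:

ceph fs set cephfs max_mds 2
ceph fs set cephfs standby_count_wanted 1
ceph orch apply mds cephfs 3

# afterwards, check that two ranks are active and a standby is available
ceph fs status cephfs
ceph mds stat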

On Sun, 24 Oct 2021 at 09:52, huxiaoyu@xxxxxxxxxxxx <huxiaoyu@xxxxxxxxxxxx> wrote:

> Dear Cephers,
>
> When setting up multiple active CephFS MDS, how do I make these MDS highly
> available, i.e. whenever an MDS fails, another MDS quickly takes over?
> Does it mean that for N active MDS I need to set up N standby MDS, and
> associate each standby MDS with one active MDS?
>
> What would be the best practice for high availability with multiple active
> MDS?
>
> best regards,
>
> samuel
>
>
>
> huxiaoyu@xxxxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


