Re: Multiple CephFS creation

On Mon, Mar 30, 2020 at 11:57 PM Eugen Block <eblock@xxxxxx> wrote:
> For the standby daemon you have to be aware of this:
>
> > By default, if none of these settings are used, all MDS daemons
> > which do not hold a rank will
> > be used as 'standbys' for any rank.
> > [...]
> > When a daemon has entered the standby replay state, it will only be
> > used as a standby for
> > the rank that it is following. If another rank fails, this standby
> > replay daemon will not be
> > used as a replacement, even if no other standbys are available.
>
> Some of the mentioned settings are for example:
>
> mds_standby_for_rank
> mds_standby_for_name
> mds_standby_for_fscid
>
> The easiest way is to have one standby daemon per CephFS and let them
> handle the failover.
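
For reference, in pre-Octopus releases those settings went into
ceph.conf. A minimal sketch, assuming a standby daemon named
"standby-a" that should follow the active daemon "mds-a" (both
names are placeholders):

  [mds.standby-a]
  mds_standby_for_name = mds-a
  mds_standby_replay = true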

This has changed in Octopus: the above config variables have been
removed. Instead, follow this procedure:

https://docs.ceph.com/docs/octopus/cephfs/standby/#configuring-mds-file-system-affinity
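
In short, with file system affinity you tag an MDS with the file
system it should serve, e.g. (a sketch; the daemon name "standby-a"
and file system name "cephfs2" are placeholders):

  ceph config set mds.standby-a mds_join_fs cephfs2

The monitors will then prefer a standby whose mds_join_fs matches
the file system of a failed rank when choosing a replacement.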

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


