Thanks for the clarification. IIRC I had trouble applying the
mds_standby settings in Nautilus already, but I haven't verified that
yet, which is why I didn't mention it in my response. I'll take
another look at it.
Zitat von Patrick Donnelly <pdonnell@xxxxxxxxxx>:
On Mon, Mar 30, 2020 at 11:57 PM Eugen Block <eblock@xxxxxx> wrote:
For the standby daemon you have to be aware of this:
> By default, if none of these settings are used, all MDS daemons
> which do not hold a rank will
> be used as 'standbys' for any rank.
> [...]
> When a daemon has entered the standby replay state, it will only be
> used as a standby for
> the rank that it is following. If another rank fails, this standby
> replay daemon will not be
> used as a replacement, even if no other standbys are available.
Some of the settings mentioned there are, for example:
mds_standby_for_rank
mds_standby_for_name
mds_standby_for_fscid
The easiest way is to have one standby daemon per CephFS and let them
handle the failover.
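
As a rough sketch, a pre-Octopus ceph.conf fragment using these
options could look like this (the daemon names "mds-a" and "mds-b"
are just examples, not anything from this thread):

```
[mds.mds-b]
# Act as standby for the active daemon named "mds-a" (example name)
mds_standby_for_name = mds-a
# Optionally follow that rank's journal continuously (standby-replay)
mds_standby_replay = true
```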
This has changed in Octopus. The above config variables are removed.
Instead, follow this procedure:
https://docs.ceph.com/docs/octopus/cephfs/standby/#configuring-mds-file-system-affinity
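
For illustration, the Octopus-style configuration from that page works
roughly like this (file system and daemon names here are assumed
examples):

```
# Pin MDS daemon "b" to the file system "cephfs" via file system affinity
ceph config set mds.b mds_join_fs cephfs
# Standby-replay is now enabled per file system rather than per daemon
ceph fs set cephfs allow_standby_replay true
```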
--
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx