Thanks for this. Still on Nautilus here because this is a Proxmox cluster, but good for folks tracking master to know.

J

On Tue, Mar 31, 2020, 3:14 AM Patrick Donnelly <pdonnell@xxxxxxxxxx> wrote:
> On Mon, Mar 30, 2020 at 11:57 PM Eugen Block <eblock@xxxxxx> wrote:
> > For the standby daemon you have to be aware of this:
> >
> > > By default, if none of these settings are used, all MDS daemons
> > > which do not hold a rank will be used as 'standbys' for any rank.
> > > [...]
> > > When a daemon has entered the standby replay state, it will only be
> > > used as a standby for the rank that it is following. If another rank
> > > fails, this standby replay daemon will not be used as a replacement,
> > > even if no other standbys are available.
> >
> > Some of the mentioned settings are, for example:
> >
> > mds_standby_for_rank
> > mds_standby_for_name
> > mds_standby_for_fscid
> >
> > The easiest way is to have one standby daemon per CephFS and let them
> > handle the failover.
>
> This has changed in Octopus. The above config variables are removed.
> Instead, follow this procedure:
>
> https://docs.ceph.com/docs/octopus/cephfs/standby/#configuring-mds-file-system-affinity
>
> --
> Patrick Donnelly, Ph.D.
> He / Him / His
> Senior Software Engineer
> Red Hat Sunnyvale, CA
> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
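[Editor's note: a minimal sketch of the two approaches discussed above. The daemon names "mds-a"/"mds-b" and the file system name "cephfs" are hypothetical placeholders; adjust to your cluster.]

```shell
# Nautilus and earlier: bind a standby to a specific active MDS with the
# mds_standby_for_* options named in the thread, e.g. in ceph.conf:
#
#   [mds.mds-b]
#   mds_standby_for_name = mds-a
#
# Octopus and later: those options are removed. Instead, express file system
# affinity per daemon via mds_join_fs (see the linked docs). The monitors
# then prefer standbys whose affinity matches the file system when filling
# a failed rank:
ceph config set mds.mds-b mds_join_fs cephfs
```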