Re: Upgrade tips from Luminous to Nautilus?

I don't have a Luminous cluster at hand right now, but setting max_mds to 1 should already take care of that and stop the extra MDS daemons. Do you have pinning enabled (subdirectories pinned to a specific MDS)?
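
If you want to double-check before touching anything, something along these lines should do (the filesystem name is taken from your mail, the mount point is just an example):

# show max_mds and which ranks are currently active
ceph fs get dadup_pmrb
ceph status

# on a client with the filesystem mounted, check whether a directory is pinned;
# a ceph.dir.pin value of -1 means it is not pinned to any rank
getfattr -n ceph.dir.pin /mnt/cephfs/some/subdir

If anything turns out to be pinned, clearing the pin (setfattr -n ceph.dir.pin -v -1 on that directory) before going down to a single rank would be my first step.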


Quoting Mark Schouten <mark@xxxxxxxx>:

On Thu, Apr 29, 2021 at 10:58:15AM +0200, Mark Schouten wrote:
We've done our fair share of Ceph cluster upgrades since Hammer, and
have not seen many problems with them. I'm now at the point where I have
to upgrade a rather large cluster running Luminous, and I would like to
hear from other users whether they have experience with issues I can
expect, so that I can anticipate them beforehand.


Thanks for the replies!

Just one question though. Step one for me was to lower max_mds to one.
The documentation seems to suggest that the cluster automagically moves
the extra MDSes to a standby state. However, nothing really happens.

root@osdnode01:~# ceph fs get dadup_pmrb | grep max_mds
max_mds 1

I still have three active ranks. Do I simply restart two of the MDSes
and force the filesystem down to one active daemon, or is there a nicer
way to move two MDSes from active to standby?
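
For what it's worth, the only alternative I could come up with myself is
to deactivate the non-zero ranks by hand, something like this (untested,
and I'm just assuming ranks 1 and 2 are the ones that need to stop):

root@osdnode01:~# ceph mds deactivate dadup_pmrb:2
root@osdnode01:~# ceph mds deactivate dadup_pmrb:1

But I'd rather hear whether that is the intended way on Luminous.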

Thanks again!

--
Mark Schouten     | Tuxis B.V.
KvK: 74698818     | http://www.tuxis.nl/
T: +31 318 200208 | info@xxxxxxxx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


