Hi cephers,

We are currently upgrading our Ceph cluster from Mimic to Nautilus. The filesystem has 5 active ranks, and we were stepping max_mds down from 5 to 1. The first steps went smoothly, but when we set max_mds from 2 to 1, the cluster showed rank 1 as failed. These are the MDS logs:

2021-03-12 16:21:26.974 7f366e949700  1 mds.1.125077 handle_mds_map state change up:boot --> up:replay
2021-03-12 16:21:26.974 7f366e949700  1 mds.1.125077 replay_start
2021-03-12 16:21:26.974 7f366e949700  1 mds.1.125077  recovery set is 0
2021-03-12 16:21:26.974 7f366e949700  1 mds.1.125077  waiting for osdmap 460461 (which blacklists prior instance)
2021-03-12 16:21:27.018 7f366893d700  0 mds.1.cache creating system inode with ino:0x101
2021-03-12 16:21:27.019 7f366893d700  0 mds.1.cache creating system inode with ino:0x1
2021-03-12 16:21:27.404 7f366713a700  0 mds.1.cache creating system inode with ino:0x100
2021-03-12 16:21:27.407 7f366713a700 -1 log_channel(cluster) log [ERR] : client client1: (2934972) loaded with preallocated inodes that are inconsistent with inotable
2021-03-12 16:21:27.407 7f366713a700 -1 log_channel(cluster) log [ERR] : client client2: (2862164) loaded with preallocated inodes that are inconsistent with inotable
2021-03-12 16:21:27.407 7f366713a700 -1 log_channel(cluster) log [ERR] : client client3: (2579839) loaded with preallocated inodes that are inconsistent with inotable
2021-03-12 16:21:27.407 7f366713a700 -1 log_channel(cluster) log [ERR] : client client4: (2579815) loaded with preallocated inodes that are inconsistent with inotable
...

Thanks for any help!
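
For reference, a minimal sketch of the sequence we ran; the filesystem name "cephfs" is illustrative, not our real name:

    # Step the active MDS ranks down one at a time, letting the
    # filesystem settle between steps ("ceph fs status" shows the ranks).
    ceph fs set cephfs max_mds 2
    ceph fs status cephfs        # wait until only ranks 0 and 1 are active
    ceph fs set cephfs max_mds 1
    ceph fs status cephfs        # rank 1 now shows as failed instead of stopping
    ceph health detail           # reports the "inconsistent with inotable" errors above

(On Nautilus, lowering max_mds is supposed to stop the surplus ranks automatically; we did not run any manual deactivation.)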