Hi Jan-Philipp,

I've noticed this a couple of times on Nautilus after doing some large
backfill operations. It seems the old osd maps don't get trimmed after
the cluster returns to HEALTH_OK, so they build up on the mons. I do a
"du" on the mon folder, e.g. du -shx /var/lib/ceph/mon/, and it shows
several GB of data. I give all my mgrs and mons a restart, and after a
few minutes I can see this osd map data getting purged from the mons.
After a while it should be back down to a few hundred MB (depending on
cluster size).

This may not be the problem in your case, but it's an easy thing to try.
Note that if your cluster is being held in HEALTH_WARN or HEALTH_ERR by
something, that can also explain the osd maps not being trimmed. Make
sure you get the cluster back to HEALTH_OK first.

Rich

On Wed, 9 Jun 2021 at 08:29, Jan-Philipp Litza <jpl@xxxxxxxxx> wrote:
>
> Hi everyone,
>
> recently I've been noticing that starting OSDs for the first time takes
> ages (like, more than an hour) before they are even picked up by the
> monitors as "up" and start backfilling. I'm not entirely sure if this
> is a new phenomenon or if it has always been that way. Either way, I'd
> like to understand why.
>
> When I execute `ceph daemon osd.X status`, it says "state: preboot" and
> I can see the "newest_map" increase slowly. Apparently, a new OSD
> doesn't just fetch the latest OSD map and get to work, but instead
> fetches hundreds of thousands of historical OSD maps from the mon,
> burning CPU while parsing them.
>
> I wasn't able to find any good documentation on the OSDMap, in
> particular why its historical versions need to be kept and why the OSD
> seemingly needs so many of them. Can anybody point me in the right
> direction? Or is something wrong with my cluster?
>
> Best regards,
> Jan-Philipp Litza
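
P.S. In case it's useful, here's a rough sketch of the sequence I mean,
assuming systemd-managed daemons with the usual ceph-mon@<id> /
ceph-mgr@<id> unit names and IDs matching the short hostname; adjust for
your deployment:

    # Make sure the cluster is back to HEALTH_OK first, otherwise the
    # mons will keep holding on to the old osd maps
    ceph -s
    ceph health detail

    # Check how large the mon store has grown
    du -shx /var/lib/ceph/mon/

    # Restart the mgrs, then the mons, one node at a time
    systemctl restart ceph-mgr@$(hostname -s)
    systemctl restart ceph-mon@$(hostname -s)

    # A few minutes later the store should be back to a few hundred MB
    du -shx /var/lib/ceph/mon/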
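
And to see how many maps the mons are keeping and how far an OSD in
"preboot" still has to catch up, something like the following should
show it. I'm assuming here that `ceph report` exposes the committed
osdmap range; the field names may differ between releases:

    # Range of osdmap epochs the mons are currently keeping
    ceph report 2>/dev/null | grep -E '"osdmap_(first|last)_committed"'

    # Current cluster osdmap epoch
    ceph osd dump | head -1

    # On the OSD host: "newest_map" should creep up towards the cluster
    # epoch while the OSD is in preboot
    ceph daemon osd.X status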