On 8/30/18 10:28 AM, Dan van der Ster wrote:
> Hi,
>
> Is anyone else seeing rocksdb mon stores slowly growing to >15GB,
> eventually triggering the 'mon is using a lot of disk space' warning?
>
> Since upgrading to luminous, we've seen this happen at least twice.
> Each time, we restart all the mons and the stores slowly trim down to
> <500MB. We have 'mon compact on start = true', but it's not the
> compaction that's shrinking the rocksdbs -- the space used seems to
> decrease over a few minutes only after *all* mons have been restarted.
>
> This reminds me of a hammer-era issue where references to trimmed maps
> were leaking -- I can't find that bug at the moment, though.
>

I just saw your message in the other thread and thought I'd reply here.

I have seen this recently as well with Luminous 12.2.8 after a large migration. The cluster grew from ~2000 OSDs to ~2500, and the rebalance took about four days. Afterwards all the MONs were 15~16GB in size and were issuing that warning.

I stopped the MONs, compacted their stores using ceph-monstore-tool, and started them again; that worked. I'm usually cautious about doing an online compaction, as it sometimes hurts MON performance.

I'm not sure yet why this is happening, as the MONs should compact during normal operation.

Wido

> Cheers, Dan
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
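The offline compaction described above can be sketched roughly as follows. This is a hedged outline, not an exact transcript of what was run: the mon id "mon-a", the systemd unit name, and the store path are assumptions and must be adjusted for your deployment.

```shell
# Sketch of an offline mon store compaction. "mon-a" and the store
# path below are assumptions -- substitute your actual mon id/path.
# Do this one mon at a time so the cluster keeps quorum.

systemctl stop ceph-mon@mon-a

# Compact the RocksDB-backed mon store in place while the daemon is down.
ceph-monstore-tool /var/lib/ceph/mon/ceph-mon-a compact

systemctl start ceph-mon@mon-a
```

For comparison, an online compaction can be requested with `ceph tell mon.<id> compact`, which is the variant Wido avoids on busy clusters because it can degrade MON performance while it runs.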