On 04/27/2018 08:31 PM, David Turner wrote:
> I'm assuming that the "very bad move" means that you have some PGs not
> in active+clean. Any non-active+clean PG will prevent your mons from
> being able to compact their db store. This is by design so that if
> something were to happen where the data on some of the copies of the PG
> were lost and gone forever, the mons could do their best to enable the
> cluster to reconstruct the PG, knowing when OSDs went down/up, when PGs
> moved to new locations, etc.
>
> Thankfully there isn't a way around this. Something you can do is stop
> a mon, move the /var/lib/mon/$(hostname -s)/ folder to a new disk with
> more space, set it to mount in the proper location, and start it back
> up. You would want to do this for each mon to give them more room for
> the mon store to grow. Make sure to give the mon plenty of time to get
> back up into the quorum before moving on to the next one.
>

Indeed. This is something about the Monitors that a lot of people don't know.

I always suggest installing a >200GB DC-grade SSD in the Monitors so that you
can handle large data movements without the MONs running out of space.

So yes, move this data to a new disk. Without all PGs active+clean you can't
trim the store.

> On Wed, Apr 25, 2018 at 10:25 AM Luis Periquito <periquito@xxxxxxxxx> wrote:
>
>     Hi all,
>
>     we have a (really) big cluster that's undergoing a very bad move and the
>     monitor database is growing at an alarming rate.
>
>     The cluster is running jewel (10.2.7); is there any way to trim the
>     monitor database before it gets to HEALTH_OK?
>
>     I've searched and so far only found people saying not really, but just
>     wanted a final sanity check...
>
>     thanks,
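For anyone following along, the relocation David describes would look roughly
like the lines below. This is an untested sketch, not an exact recipe: it
assumes a systemd-managed Jewel node, the default cluster name "ceph" (so the
mon store lives under /var/lib/ceph/mon/ceph-$(hostname -s)), and a new SSD at
/dev/sdX1, which is a placeholder device name you need to replace. Adjust
paths and ownership to your own deployment.

  # Mon data directory on this host (default layout assumed)
  MON_DIR=/var/lib/ceph/mon/ceph-$(hostname -s)

  # 1. Stop only this monitor; the remaining mons keep quorum.
  systemctl stop ceph-mon@$(hostname -s)

  # 2. Copy the store onto the new disk, preserving ownership and permissions.
  mkdir -p /mnt/newssd
  mount /dev/sdX1 /mnt/newssd
  rsync -a "$MON_DIR/" /mnt/newssd/
  umount /mnt/newssd

  # 3. Swap the new disk into the original location; keep the old copy around
  #    until the mon is confirmed healthy again. Add a matching /etc/fstab entry.
  mv "$MON_DIR" "${MON_DIR}.old"
  mkdir "$MON_DIR"
  mount /dev/sdX1 "$MON_DIR"
  chown -R ceph:ceph "$MON_DIR"

  # 4. Start the mon and wait until it rejoins the quorum before doing the next one.
  systemctl start ceph-mon@$(hostname -s)
  ceph quorum_status --format json-pretty

Once all PGs are active+clean again the mons will trim the store on their own;
if it stays large after that you can also try "ceph tell mon.<id> compact" to
force a compaction.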