On 1/7/19 11:15 PM, Pardhiv Karri wrote:
> Thank you Bryan, for the information. We have 816 OSDs of size 2TB each.
> The "mon store too big" warning popped up when no rebalancing had
> happened in that month. It was slightly above the 15360 threshold,
> around 15900 or 16100, and stayed there for more than a week. We ran
> "ceph tell mon.[ID] compact" to get it back down earlier this week.
> Currently the mon store is around 12G on each monitor. If it doesn't
> grow then I won't change the value, but if it grows and gives the
> warning then I will increase it using "mon_data_size_warn".

This is normal. The MONs will keep a history of OSDMaps if one or more
PGs are not active+clean. They will trim that history once all the PGs
are clean again; nothing to worry about.

You can increase the setting for the warning, but that will not shrink
the database. Just make sure your monitors have enough free space.

Wido

> Thanks,
> Pardhiv Karri
>
> On Mon, Jan 7, 2019 at 1:55 PM Bryan Stillwell <bstillwell@xxxxxxxxxxx> wrote:
>
>     I believe the option you're looking for is mon_data_size_warn. The
>     default is set to 16106127360.
>
>     I've found that sometimes the mons need a little help getting
>     started with trimming if you just completed a large expansion.
>     Earlier today I had a cluster where the mon's data directory was
>     over 40GB on all the mons. When I restarted them one at a time with
>     'mon_compact_on_start = true' set in the '[mon]' section of
>     ceph.conf, they stayed around 40GB in size. However, when I was
>     about to hit send on an email to the list about this very topic,
>     the warning cleared up and the data directory is now between 1-3GB
>     on each of the mons. This was on a cluster with >1900 OSDs.
>
>     Bryan
>
>     From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of
>     Pardhiv Karri <meher4india@xxxxxxxxx>
>     Date: Monday, January 7, 2019 at 11:08 AM
>     To: ceph-users <ceph-users@xxxxxxxxxxxxxx>
>     Subject: Is it possible to increase Ceph Mon store?
>
>     Hi,
>
>     We have a large Ceph cluster (Hammer version). We recently saw its
>     mon store growing too big, >15GB on all 3 monitors, without any
>     rebalancing happening for quite some time. We have compacted the DB
>     using "ceph tell mon.[ID] compact" for now. But is there a way to
>     increase the size of the mon store to 32GB or something, to avoid
>     the Ceph health going to a warning state due to the mon store
>     growing too big?
>
>     --
>     Thanks,
>     Pardhiv Karri
>
> --
> Pardhiv Karri
> "Rise and Rise again until LAMBS become LIONS"
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
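[Editor's note] For readers finding this thread later, the knobs discussed above can be sketched as follows. This is a hedged example, not an endorsed procedure: the mon ID "a" is a placeholder for your monitor's name, the 32GB target comes from the question in the thread, and mon_data_size_warn takes a value in bytes, so the arithmetic is shown explicitly. Verify the option names against your Ceph release before using them.

```shell
# Compute a 32 GiB warning threshold in bytes for mon_data_size_warn.
# (The default quoted above, 16106127360, is 15 GiB.)
THRESHOLD=$((32 * 1024 * 1024 * 1024))
echo "$THRESHOLD"   # -> 34359738368

# Inject the new threshold into a running monitor ("a" is a placeholder
# for your mon's ID); repeat for each monitor:
#   ceph tell mon.a injectargs "--mon_data_size_warn=$THRESHOLD"
#
# One-off compaction of a monitor's store, as used in the thread:
#   ceph tell mon.a compact
#
# To make the settings persistent, and have mons compact their store on
# every restart as Bryan describes, add to the [mon] section of ceph.conf:
#   [mon]
#   mon compact on start = true
#   mon data size warn = 34359738368
```

Note that injectargs only changes the running daemons; the ceph.conf entries are what survive a restart. As Wido points out, raising the threshold only silences the warning and does not shrink the database.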