Re: Is it possible to increase Ceph Mon store?

For what it's worth, I think the behaviour Pardhiv and Bryan are describing is not quite normal, and it sounds similar to something we see on our large luminous cluster with elderly (created as jewel?) monitors. After large operations that grow the mon stores to 20GB+, leaving the cluster with all PGs active+clean for days or weeks usually does not trigger compaction, and the store sizes slowly keep growing.

I've played around with restarting monitors with and without mon_compact_on_start set, and with 'ceph tell mon.[id] compact'. On this cluster, the most reliable way to trigger a compaction was to restart all monitor daemons, one at a time, *without* mon_compact_on_start set. The stores rapidly compact down to ~1GB within a minute or so after the last mon restarts.
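
For reference, a rough sketch of the rolling restart we use (assumes systemd-managed mons and the default store path, with mon IDs matching short hostnames; substitute your own IDs):

    # on each mon host in turn, waiting for quorum to recover in between
    systemctl restart ceph-mon@$(hostname -s)
    ceph quorum_status | grep quorum_names    # confirm the mon has rejoined

    # watch the store size drop once the last mon has restarted
    du -sh /var/lib/ceph/mon/ceph-$(hostname -s)/store.db

    # the explicit per-mon compaction we also experimented with
    ceph tell mon.$(hostname -s) compact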

It's worth noting that occasionally (roughly 1 time in 10, or less often) the stores will compact without prompting after all PGs become active+clean.

I haven't put much time into this, as I'm planning to reinstall the monitors to get RocksDB mon stores. If the problem persists with the new monitors I'll have another look at it.
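
As an aside, luminous records a mon's store backend in its data directory, so you can check what each mon is currently running (path assumed from a standard deployment; verify on your hosts):

    cat /var/lib/ceph/mon/ceph-$(hostname -s)/kv_backend    # prints leveldb or rocksdb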

Cheers
Tom

> -----Original Message-----
> From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> On Behalf Of Wido
> den Hollander
> Sent: 08 January 2019 08:28
> To: Pardhiv Karri <meher4india@xxxxxxxxx>; Bryan Stillwell
> <bstillwell@xxxxxxxxxxx>
> Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re:  Is it possible to increase Ceph Mon store?
> 
> 
> 
> On 1/7/19 11:15 PM, Pardhiv Karri wrote:
> > Thank you Bryan, for the information. We have 816 OSDs of 2TB each.
> > The "mon store is too big" warning popped up even though no rebalancing
> > had happened that month. The store was slightly above the 15360 MB
> > threshold, around 15900-16100 MB, and stayed there for more than a
> > week. We ran "ceph tell mon.[ID] compact" earlier this week to bring it
> > back down. Currently the mon store is around 12G on each monitor. If it
> > doesn't grow then I won't change anything, but if it grows and triggers
> > the warning again then I will raise the threshold via
> > "mon_data_size_warn".
> >
> 
> This is normal. The MONs keep a history of OSDMaps for as long as one or
> more PGs are not active+clean.
> 
> They will trim that history once all the PGs are clean again; nothing to
> worry about.
> 
> You can increase the setting for the warning, but that will not shrink the
> database.
> 
> Just make sure your monitors have enough free space.
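>
> If you want to verify that maps are being trimmed, the range of OSDMap
> epochs the mons still hold shows up in the cluster report, e.g. (field
> names from memory, so double-check on your version):
>
>     ceph report 2>/dev/null | grep -E 'osdmap_(first|last)_committed'
>
> A large, growing gap between first and last committed is what makes the
> store grow.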
> 
> Wido
> 
> > Thanks,
> > Pardhiv Karri
> >
> >
> >
> > On Mon, Jan 7, 2019 at 1:55 PM Bryan Stillwell <bstillwell@xxxxxxxxxxx
> > <mailto:bstillwell@xxxxxxxxxxx>> wrote:
> >
> >     I believe the option you're looking for is mon_data_size_warn.  The
> >     default is 16106127360 bytes (15 GiB).
> >
> >     I've found that sometimes the mons need a little help getting
> >     started with trimming if you've just completed a large expansion.
> >     Earlier today I had a cluster where the data directory was over
> >     40GB on all the mons. When I restarted them one at a time with
> >     'mon_compact_on_start = true' set in the '[mon]' section of
> >     ceph.conf, they stayed around 40GB in size. However, just as I was
> >     about to hit send on an email to the list about this very topic,
> >     the warning cleared up, and the data directory is now between
> >     1-3GB on each of the mons. This was on a cluster with >1900 OSDs.
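> >
> >     For completeness, that was just a plain ceph.conf entry on each mon
> >     host (worth removing again once the stores have shrunk, since the
> >     compaction makes mon startup slower):
> >
> >     [mon]
> >     mon_compact_on_start = true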
> >
> >     Bryan
> >
> >     From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of
> >     Pardhiv Karri <meher4india@xxxxxxxxx>
> >     Date: Monday, January 7, 2019 at 11:08 AM
> >     To: ceph-users <ceph-users@xxxxxxxxxxxxxx>
> >     Subject: Is it possible to increase Ceph Mon store?
> >
> >     Hi,
> >
> >     We have a large Ceph cluster (Hammer version). We recently saw its
> >     mon store grow beyond 15GB on all 3 monitors, without any
> >     rebalancing happening for quite some time. We have compacted the
> >     DB using "ceph tell mon.[ID] compact" for now. But is there a way
> >     to raise the mon store size threshold to 32GB or so, to avoid Ceph
> >     health going into a warning state because the mon store has grown
> >     too big?
> >
> >     --
> >     Thanks,
> >     Pardhiv Karri
> >
> >
> >
> > --
> > *Pardhiv Karri*
> > "Rise and Rise again untilLAMBSbecome LIONS"
> >
> >
> >
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



