Re: MON store.db keeps growing with Octopus

In the meantime, you can turn on compression in your mon's rocksdb
tunables to make things slightly less scary, something like:
mon_rocksdb_options =
write_buffer_size=33554432,compression=kLZ4Compression,level_compaction_dynamic_level_bytes=true,bottommost_compression=kLZ4HCCompression,max_background_jobs=4,max_subcompactions=2
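
For reference, a sketch of where that goes, assuming you use
ceph.conf-based configuration; the rocksdb options are only read when
the mon opens its store, so each mon needs a restart (one at a time, to
keep quorum):

[mon]
mon_rocksdb_options = write_buffer_size=33554432,compression=kLZ4Compression,level_compaction_dynamic_level_bytes=true,bottommost_compression=kLZ4HCCompression,max_background_jobs=4,max_subcompactions=2

then something like "systemctl restart ceph-mon@<id>" on each mon host.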

On Sat, Jul 11, 2020 at 12:10 AM Peter Woodman <peter@xxxxxxxxxxxx> wrote:

> Are you running the ceph insights mgr plugin? I was, and my cluster did
> this on rebalance. I turned it off and it's fine.
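>
> (For anyone wanting to try the same, the module is disabled with the
> standard mgr command:
>
> ceph mgr module disable insights
>
> and "ceph mgr module enable insights" turns it back on.)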
>
> On Fri, Jul 10, 2020 at 5:17 PM Michael Fladischer <michael@xxxxxxxx>
> wrote:
>
>> Hi,
>>
>> our cluster is on Octopus 15.2.4. We noticed that our MONs all ran out
>> of space yesterday because the store.db folder kept growing until it
>> filled up the filesystem. We added more space to the MON nodes, but
>> store.db keeps growing.
>>
>> Right now it's ~220GiB on the two MON nodes that are active. We shut
>> down one MON node when it hit ~98GiB; it seems to have trimmed its
>> local store.db down to 102MiB, but it is now growing again as well.
>>
>> Checking the keys in store.db while the MON is offline shows a lot of
>> "logm" and "osdmap" keys:
>>
>> ceph-monstore-tool <path> dump-keys | awk '{print $1}' | uniq -c
>>       86 auth
>>        2 config
>>       11 health
>>   275929 logm
>>       55 mds_health
>>        1 mds_metadata
>>      602 mdsmap
>>      599 mgr
>>        1 mgr_command_descs
>>        3 mgr_metadata
>>      209 mgrstat
>>      461 mon_config_key
>>        1 mon_sync
>>        7 monitor
>>        1 monitor_store
>>        7 monmap
>>      454 osd_metadata
>>        1 osd_pg_creating
>>     4804 osd_snap
>>   138366 osdmap
>>      538 paxos
>>        5 pgmap
>>
>> I already tried compacting it with "ceph tell ..." and
>> "ceph-monstore-tool <path> compact", but it stayed the same size.
>> Copying it with "ceph-monstore-tool <path> store-copy <new-path>" also
>> just produced a copy of the same size.
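>>
>> (For reference, the full forms would be along these lines, assuming
>> the default mon data path:
>>
>> ceph tell mon.<id> compact
>> ceph-monstore-tool /var/lib/ceph/mon/<cluster>-<id> compact
>>
>> with the mon stopped for the offline ceph-monstore-tool run.)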
>>
>> Our cluster is currently in WARN status because we are low on space
>> and several OSDs are in backfill_full state. Could this be related?
>>
>> Regards,
>> Michael
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


