Re: mon.mon01 store is getting too big! 18119 MB >= 15360 MB -- 94% avail

If you have a large enough drive on all of your mons (and intend to keep it that way), you can raise the mon store warning threshold in the config file so that it no longer warns at 15360 MB.
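The threshold is controlled by the `mon_data_size_warn` option, which takes a value in bytes (the 15360 MB default corresponds to 16106127360 bytes). A minimal ceph.conf sketch raising it to 32 GiB:

```ini
[mon]
# Raise the mon store warning threshold from the 15 GiB default
# to 32 GiB. The value is given in bytes: 32 * 1024^3.
mon data size warn = 34359738368
```

The same value can also be injected at runtime with `ceph tell mon.* injectargs '--mon-data-size-warn=34359738368'` if you don't want to wait for a mon restart.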


David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943


If you are not the intended recipient of this message or received it erroneously, please notify the sender and delete it, together with any attachments, and be advised that any dissemination or copying of this message is prohibited.


________________________________________
From: ceph-users [ceph-users-bounces@xxxxxxxxxxxxxx] on behalf of Wido den Hollander [wido@xxxxxxxx]
Sent: Tuesday, January 31, 2017 2:35 AM
To: Martin Palma; CEPH list
Subject: Re: mon.mon01 store is getting too big! 18119 MB >= 15360 MB -- 94% avail

> On 31 January 2017 at 10:22, Martin Palma <martin@xxxxxxxx> wrote:
>
>
> Hi all,
>
> our cluster is currently performing a big expansion and is in recovery
> mode (we doubled both capacity and OSD count, growing from 600 TB to 1.2 PB).
>

Yes, that is to be expected. When not all PGs are active+clean, the MONs will not trim their data store.
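A quick way to keep an eye on this, sketched as commands against a live cluster (assuming the mon name `mon01` from this thread and the default store location):

```
# Show how many PGs are not yet active+clean; the mons keep
# accumulating map history until every PG has recovered.
ceph pg stat

# Watch how large the mon store actually is on disk.
du -sh /var/lib/ceph/mon/ceph-mon01/store.db
```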

> Now we get the following message from our monitor nodes:
>
> mon.mon01 store is getting too big! 18119 MB >= 15360 MB -- 94% avail
>
> According to [0], this is normal during an active data rebalance, and
> the store will be compacted once it has finished.
>
> Should we wait until the recovery is finished or should we perform
> "ceph tell mon.{id} compact" now during recovery?
>

Mainly wait and make sure there is enough disk space. You can try a compact, but that can take the mon offline temporarily.

Just make sure you have enough disk space :)
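If you do compact during recovery, doing one mon at a time and checking quorum in between is the safer pattern (mon name from this thread assumed):

```
# Trigger a compaction on a single monitor; it may drop out of
# quorum while this runs, so verify quorum before the next one.
ceph tell mon.mon01 compact
ceph quorum_status
```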

Wido

> Best,
> Martin
>
> [0] https://access.redhat.com/solutions/1982273
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
