Re: Checking the current full and nearfull ratio

On Thu, 4 May 2017, Adam Carheden wrote:
> How do I check the full ratio and nearfull ratio of a running cluster?
> 
> I know i can set 'mon osd full ratio' and 'mon osd nearfull ratio' in
> the [global] setting of ceph.conf. But things work fine without those
> lines (uses defaults, obviously).
> 
> They can also be changed with `ceph tell mon.* injectargs
> "--mon_osd_full_ratio .##` and `ceph tell mon.* injectargs
> "--mon_osd_nearfull_ratio .##`, in which case the running cluster's
> notion of full/nearfull wouldn't match ceph.conf.

Sort of.. those configs set the initial values, but the ones that are 
applied are actually in PGMap.  Look at 'ceph pg dump | head' and adjust 
the values with 'ceph pg set_full_ratio' and 'ceph pg set_nearfull_ratio'.
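
For reference, that pre-luminous workflow looks roughly like this from the 
shell (the ratio values shown are just the defaults, adjust to taste):

```shell
# The currently applied ratios appear in the PGMap header lines:
ceph pg dump | head
# look for lines like:
#   full_ratio 0.95
#   nearfull_ratio 0.85

# Change the applied values (this updates the PGMap, not ceph.conf,
# so the two can diverge):
ceph pg set_full_ratio 0.95
ceph pg set_nearfull_ratio 0.85
```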

Note that this is improved and cleaned up in luminous: the commands switch 
to 'ceph osd set-[near]full-ratio' and the values move into the OSDMap, 
along with the other full configurables (the failsafe ratio, and the ratio 
at which backfill is stopped).
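
On luminous the equivalent would be something like the following sketch 
(again using the default values as examples):

```shell
# Set the ratios, which now live in the OSDMap:
ceph osd set-nearfull-ratio 0.85
ceph osd set-full-ratio 0.95

# Read the applied values back out of the OSDMap:
ceph osd dump | grep ratio
```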
 
> How do I have monitors report the values they're currently running with?
> (i.e. is there something like `ceph tell mon.* dumpargs...`?)
> 
> It seems like this should be a pretty basic question, but my Googlefoo
> is failing me this morning.
> 
> For those who find this post and want to check how full their OSDs are
> rather than checking the full/nearfull limits, `ceph osd df tree` seems
> to be the hot ticket.
> 
> 
> And as long as I'm posting, I may as well get my next question out of
> the way. My minimally used 4-node, 16 OSD test cluster looks like this:
> # ceph osd df tree
> ....
> MIN/MAX VAR: 0.75/1.31  STDDEV: 0.84
> 
> When should one be concerned about imbalance? What values for
> min/max/stddev represent problems where reweighting an OSD (or other
> action) is advisable? Is that the purpose of nearfull, or does one need
> to monitor individual OSDs too?

You can use 'osd reweight-by-utilization' to reduce the variance.
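
For example (a sketch; the threshold argument is optional and defaults to 
120, meaning only OSDs more than 20% above mean utilization get reweighted):

```shell
# Dry run: show what reweight-by-utilization would do, without changing
# anything:
ceph osd test-reweight-by-utilization

# Apply it, with an explicit threshold:
ceph osd reweight-by-utilization 120
```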

sage
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com