Checking the current full and nearfull ratio

How do I check the full ratio and nearfull ratio of a running cluster?

I know I can set 'mon osd full ratio' and 'mon osd nearfull ratio' in
the [global] section of ceph.conf. But the cluster works fine without
those lines (it uses the defaults, obviously).

They can also be changed with `ceph tell mon.* injectargs
"--mon_osd_full_ratio .##"` and `ceph tell mon.* injectargs
"--mon_osd_nearfull_ratio .##"`, in which case the running cluster's
notion of full/nearfull wouldn't match ceph.conf.

How do I have monitors report the values they're currently running with?
(i.e. is there something like `ceph tell mon.* dumpargs...`?)
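(Answering my own question, partially: the monitor's admin socket can
report the values the daemon is actually running with. A sketch, assuming
a monitor id of `a` and the default socket path, run on the mon host:

```shell
# Ask one monitor for a single running value ("a" is a hypothetical mon id):
ceph daemon mon.a config get mon_osd_full_ratio
ceph daemon mon.a config get mon_osd_nearfull_ratio

# Or dump the whole running config and filter:
ceph daemon mon.a config show | grep ratio
```

On Luminous and later, I believe the full/nearfull ratios live in the
OSDMap rather than in per-mon config, so `ceph osd dump | grep ratio`
shows them and `ceph osd set-full-ratio` / `ceph osd set-nearfull-ratio`
change them.)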

It seems like this should be a pretty basic question, but my Google-fu
is failing me this morning.

For those who find this post and want to check how full their OSDs are
rather than checking the full/nearfull limits, `ceph osd df tree` seems
to be the hot ticket.


And as long as I'm posting, I may as well get my next question out of
the way. My minimally used 4-node, 16 OSD test cluster looks like this:
# ceph osd df tree
....
MIN/MAX VAR: 0.75/1.31  STDDEV: 0.84

When should one be concerned about imbalance? What sort of
min/max/stddev values represent problems where reweighting an OSD (or
some other action) is advisable? Is that the purpose of nearfull, or
does one need to monitor individual OSDs too?
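For what it's worth, those summary numbers can be reproduced by hand. A
minimal sketch, assuming VAR is each OSD's utilization divided by the
mean utilization and STDDEV is the (population) standard deviation in
percentage points; the four utilization values below are made up, not
from my cluster:

```shell
# Hypothetical per-OSD utilization percentages (e.g. scraped from
# the %USE column of `ceph osd df tree`); substitute real values.
echo "55.2 48.1 60.3 51.9" | tr ' ' '\n' | awk '
{
  util[NR] = $1; sum += $1; sumsq += $1 * $1
}
END {
  mean = sum / NR
  min = util[1]; max = util[1]
  for (i = 2; i <= NR; i++) {
    if (util[i] < min) min = util[i]
    if (util[i] > max) max = util[i]
  }
  # VAR = utilization / mean utilization; STDDEV = population std dev
  printf "MIN/MAX VAR: %.2f/%.2f  STDDEV: %.2f\n",
         min / mean, max / mean, sqrt(sumsq / NR - mean * mean)
}'
# -> MIN/MAX VAR: 0.89/1.12  STDDEV: 4.48
```

A MIN/MAX VAR near 1.00/1.00 would mean perfectly even utilization, so
the further those drift from 1, the more lopsided the data placement.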


-- 
Adam Carheden

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


