Re: Pool Max Avail and Ceph Dashboard Pool Usage on Nautilus giving different percentages

Does anyone know if this also respects the nearfull values?

Thank you in advance
Mehmet

On 14 January 2020 15:20:39 CET, Stephan Mueller <smueller@xxxxxxxx> wrote:
Hi,
I sent out this message on the 19th of December and somehow it didn't
get onto the list, and I just noticed it now. Sorry for the delay.
I tried to resend it, but it just returned the same error that the
mail was not deliverable to the Ceph mailing list. I will send the
message below as soon as it's finally possible, but for now this
should help you out.

Stephan
Hi,

if "MAX AVAIL" displays the wrong data, the bug is just made more
visible through the dashboard, as the calculation is correct.

To get the right percentage you have to divide the used space by the
total, and the total can only consist of two states, used and not-used
space, so both states are added together to get the total.

Or in short:

used / (avail + used)
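
A minimal sketch of that formula in Python (the function name is just
for illustration):

def pool_usage_ratio(used, avail):
    # the total is the sum of the only two possible states: used and not used
    return used / (used + avail)

# e.g. with the pool values quoted further down:
# pool_usage_ratio(324, 73) ~= 0.82, pool_usage_ratio(324, 24) ~= 0.93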

Just looked into the C++ code - MAX AVAIL is calculated the following
way:

avail_res = avail / raw_used_rate
(https://github.com/ceph/ceph/blob/nautilus/src/mon/PGMap.cc#L905)

raw_used_rate *= (sum.num_object_copies - sum.num_objects_degraded) / sum.num_object_copies
(https://github.com/ceph/ceph/blob/nautilus/src/mon/PGMap.cc#L892)
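
The same calculation as a small Python sketch, with variable names
taken from the quoted C++ code (a simplification for illustration, not
a copy of PGMap.cc):

def max_avail(avail, pool_size, num_object_copies, num_objects_degraded):
    # for replicated pools, raw_used_rate starts out as the pool size
    raw_used_rate = pool_size
    # ... and is scaled by the fraction of object copies that are not degraded
    raw_used_rate *= (num_object_copies - num_objects_degraded) / num_object_copies
    # MAX AVAIL is the available raw space divided by that rate
    return avail / raw_used_rate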


On Tuesday, 17.12.2019, 07:07 +0100, ceph@xxxxxxxxxx wrote:
I have observed this in the Ceph Nautilus dashboard too - and think it
is a display bug... but sometimes it shows the right values.

Which Nautilus version do you use?


On 10 December 2019 14:31:05 CET, "David Majchrzak, ODERLAND
Webbhotell AB" <david@xxxxxxxxxxx> wrote:
Hi!

While browsing /#/pool in the Nautilus Ceph dashboard I noticed it said
93% used on the single pool we have (3x replica).

ceph df detail however shows 81% used on the pool and 67% raw usage.

# ceph df detail
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED    %RAW USED
    ssd       478 TiB     153 TiB     324 TiB     325 TiB         67.96
    TOTAL     478 TiB     153 TiB     324 TiB     325 TiB         67.96

POOLS:
    POOL    ID    STORED     OBJECTS    USED       %USED    MAX AVAIL    QUOTA OBJECTS    QUOTA BYTES    DIRTY     USED COMPR    UNDER COMPR
    echo     3    108 TiB     29.49M    324 TiB    81.61       24 TiB              N/A            N/A    29.49M           0 B            0 B

I manually calculated the used percentage to get "avail"; in your case
it seems to be 73 TiB. That means the total space available for your
pool would be 397 TiB.
I'm not sure why that is, but it's what the math behind those
calculations says.
(I found a thread regarding that on the new mailing list
(ceph-users@xxxxxxx) ->
https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/NH2LMMX5KVRWCURI3BARRUAETKE2T2QN/#JDHXOQKWF6NZLQMOGEPAQCLI44KB54A3
)

0.8161 = used (324 TiB) / total => total = 397 TiB
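
The same back-calculation as a quick Python sketch (numbers taken from
the ceph df detail output above):

used = 324                            # TiB, USED of the pool
used_percent = 81.61                  # %USED of the pool
total = used / (used_percent / 100)   # ~397 TiB
avail = total - used                  # ~73 TiB
print(round(total), round(avail))     # 397 73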

Then I looked at the remaining calculations:

raw_used_rate *= (sum.num_object_copies - sum.num_objects_degraded) / sum.num_object_copies

and

avail_res = avail / raw_used_rate

First I looked up the initial value of "raw_used_rate" for replicated
pools. It's the pool size, so we can put in 3 here, and "avail_res" is
the 24 TiB MAX AVAIL.

So I first calculated the final "raw_used_rate", which is 3.042. That
means that you have around 4.2% degraded PGs in your pool.
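
As a quick numeric check in Python (assuming the ~73 TiB "avail"
derived above and the 24 TiB MAX AVAIL as "avail_res"):

avail = 73                           # TiB, from the %USED back-calculation above
avail_res = 24                       # TiB, MAX AVAIL from ceph df detail
raw_used_rate = avail / avail_res
print(round(raw_used_rate, 3))       # 3.042, compared to the initial value of 3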



I know we're looking at the most full OSD (210 PGs, 79% used, 1.17 VAR)
and calculate MAX AVAIL from that. But where does the 93% full in the
dashboard come from?

As said above, the calculation is right but the data is wrong... It
uses the real amount of data that can still be put into the selected
pool, but everywhere else it uses sizes that include all pool replicas.

I created an issue to fix this: https://tracker.ceph.com/issues/43384


My guess is that it comes from calculating:

1 - Max Avail / (Used + Max Avail) = 0.93
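
That guess matches the numbers from the ceph df detail output above
(24 TiB MAX AVAIL, 324 TiB USED); a one-line Python check:

print(1 - 24 / (324 + 24))   # ~0.931, i.e. the 93% shown by the dashboard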


Kind Regards,

David Majchrzak

Hope I could clarify some things and thanks for your feedback :)

BTW, this problem currently still exists, as there hasn't been any
change to the mentioned lines since the Nautilus release.

Stephan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
