Hi Yordan,
this is Mimic documentation, and these snippets aren't valid for Nautilus any more. They are still present in the Nautilus pages, though.
I'm going to create a corresponding ticket to fix that.
The relevant Nautilus changes to the 'ceph df [detail]' command can be found in the Nautilus release notes: https://docs.ceph.com/docs/nautilus/releases/nautilus/
In short, the USED field now accounts for all the overhead data, including replicas etc. It's the STORED field which now represents the pure data the user put into a pool.
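If you want to compare the two values programmatically, here is a minimal sketch in Python that reads them from the JSON output. The field names ("stored", "bytes_used") and the overall JSON layout are assumptions based on typical Nautilus output, so verify them against 'ceph df detail -f json-pretty' on your own cluster first:

#!/usr/bin/env python3
# Sketch: compare STORED vs USED per pool using the JSON output of 'ceph df'.
# Field names ("stored", "bytes_used") are assumed from Nautilus output --
# check 'ceph df detail -f json-pretty' on your cluster before relying on them.
import json
import subprocess

report = json.loads(subprocess.check_output(["ceph", "df", "detail", "-f", "json"]))

for pool in report.get("pools", []):
    stats = pool["stats"]
    stored = stats.get("stored", 0)    # pure user data (new in Nautilus)
    used = stats.get("bytes_used", 0)  # includes replication / EC overhead
    ratio = used / stored if stored else 0.0
    print("%-20s stored=%6.2f TiB  used=%6.2f TiB  used/stored=%.2f"
          % (pool["name"], stored / 2**40, used / 2**40, ratio))
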
Thanks,
Igor
On 10/2/2019 8:33 AM, Yordan Yordanov (Innologica) wrote:
The documentation states:
The POOLS section of the output provides a list of pools and the notional usage of each pool. The output from this section DOES NOT reflect replicas, clones or snapshots. For example, if you store an object with 1MB of data, the notional usage will be 1MB, but the actual usage may be 2MB or more depending on the number of replicas, clones and snapshots.
However, in our case we are clearly seeing the USED field multiplying the total object size by the number of replicas.
[root@blackmirror ~]# ceph df
RAW STORAGE:
    CLASS     SIZE       AVAIL      USED       RAW USED     %RAW USED
    hdd       80 TiB     34 TiB     46 TiB     46 TiB       58.10
    TOTAL     80 TiB     34 TiB     46 TiB     46 TiB       58.10

POOLS:
    POOL      ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    one        2     15 TiB      4.05M       46 TiB      68.32     7.2 TiB
    bench      5     250 MiB     67          250 MiB     0         22 TiB
[root@blackmirror ~]# rbd du -p one
NAME        PROVISIONED     USED
...
<TOTAL>     20 TiB          15 TiB
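Working backwards from the numbers above, USED looks like STORED multiplied by the replica count. A tiny sketch of that arithmetic (plain Python; the 3x replication factor is my assumption, since the pool's size setting isn't shown here):

# Sanity check of the figures above, values in TiB.
# The replica count of 3 is an assumption -- the pool's size setting isn't shown.
stored = 15.0              # STORED for pool "one" in 'ceph df'
replicas = 3               # assumed replicated pool with size=3
print("stored * replicas = %.1f TiB" % (stored * replicas))  # ~45 TiB, close to the 46 TiB USED
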
This is causing several apps (including the Ceph dashboard) to display inaccurate percentages, because they calculate the total pool capacity as USED + MAX AVAIL, which in this case yields 53.2 TiB and is way off. 7.2 TiB is about 13% of that, so we receive alarms, and this has been bugging us for quite some time now.
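To make the distortion concrete, here is the same calculation spelled out (plain Python; the USED + MAX AVAIL formula is how I understand these apps derive pool capacity, so treat this as an illustration of that assumption rather than their actual code):

# Illustration of the capacity math described above, values in TiB.
used = 46.0                # USED for pool "one"
max_avail = 7.2            # MAX AVAIL for pool "one"

capacity = used + max_avail                               # 53.2 TiB, the derived "total"
print("capacity = %.1f TiB" % capacity)
print("free     = %.1f%%" % (max_avail / capacity * 100)) # ~13.5% free -> alarms
print("used     = %.1f%%" % (used / capacity * 100))      # ~86.5% used
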
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx