Re: Inconsistency in 'ceph df' stats

This comes up periodically on the mailing list; see, e.g.,
http://www.spinics.net/lists/ceph-users/msg15907.html

I'm not sure if your case fits within those odd parameters or not, but
I bet it does. :)
-Greg

On Mon, Aug 31, 2015 at 8:16 PM, Stillwell, Bryan
<bryan.stillwell@xxxxxxxxxxx> wrote:
> On one of our staging ceph clusters (firefly 0.80.10) I've noticed that
> some of the statistics in the 'ceph df' output don't seem to match up.
> For example, in the output below the amount of raw used is 8,402G, which
> with triple replication would be 2,800.7G used (all the pools are triple
> replication).  However, if you add up the numbers used by all the pools
> (424G + 2538G + 103G) you get 3,065G used (a difference of +264.3G).
>
> GLOBAL:
>     SIZE       AVAIL      RAW USED     %RAW USED
>     50275G     41873G        8402G         16.71
> POOLS:
>     NAME              ID     USED      %USED     MAX AVAIL     OBJECTS
>     data              0          0         0        13559G           0
>     metadata          1          0         0        13559G           0
>     rbd               2          0         0        13559G           0
>     volumes           3       424G      0.84        13559G      159651
>     images            4      2538G      5.05        13559G      325198
>     backups           5          0         0        13559G           0
>     instances         6       103G      0.21        13559G       25310
>
> The max avail amount doesn't line up either.  If you take 3 * 13,559G you
> get 40,677G available, but the global stat is 41,873G (a difference of
> 1,196G).
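>
> Spelled out as a quick sanity check (a minimal Python sketch with the
> figures above hard-coded, assuming triple replication; nothing here is
> parsed from 'ceph df' itself):
>
> REPLICATION = 3
>
> def check(raw_used_g, pool_used_g, global_avail_g, max_avail_g):
>     """Compare the global 'ceph df' stats against the per-pool numbers."""
>     expected_used = raw_used_g / float(REPLICATION)  # raw used / replicas
>     pools_used = sum(pool_used_g)                    # sum of the pool USED column
>     expected_avail = max_avail_g * REPLICATION       # MAX AVAIL * replicas
>     print("used:  %.1fG (raw/3) vs %.1fG (pools), diff %+.1fG"
>           % (expected_used, pools_used, pools_used - expected_used))
>     print("avail: %.1fG (3*max) vs %.1fG (global), diff %+.1fG"
>           % (expected_avail, global_avail_g, global_avail_g - expected_avail))
>
> # First cluster: 8402G raw used; pools 424G + 2538G + 103G;
> # 41873G global avail; 13559G max avail per pool.
> check(8402, [424, 2538, 103], 41873, 13559)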
>
>
> On another staging cluster the numbers are closer to what I would expect.
> The amount of raw used is 7,037G, which with triple replication should be
> 2,345.7G.  However, adding up the amounts used by all the pools (102G +
> 1749G + 478G + 14G) is 2,343G (a difference of just -2.7G).
>
> GLOBAL:
>     SIZE       AVAIL      RAW USED     %RAW USED
>     50275G     43238G        7037G         14.00
> POOLS:
>     NAME              ID     USED       %USED     MAX AVAIL     OBJECTS
>     data              0           0         0        13657G           0
>     metadata          1           0         0        13657G           0
>     rbd               2           0         0        13657G           0
>     volumes           3        102G      0.20        13657G       27215
>     images            4       1749G      3.48        13657G      224259
>     instances         5        478G      0.95        13657G       79221
>     backups           6           0         0        13657G           0
>     scbench           8      14704M      0.03        13657G        3677
>
> The max avail is a little further off.  Taking 3 * 13,657G you get 40,971G,
> but the global stat is 43,238G (a difference of 2,267G).
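>
> Running the same check from the sketch above with this cluster's figures
> (14,704M rounded to 14G):
>
> # Second cluster: 7037G raw used; pools 102G + 1749G + 478G + ~14G;
> # 43238G global avail; 13657G max avail per pool.
> check(7037, [102, 1749, 478, 14], 43238, 13657)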
>
> My guess would have been that the global numbers include some of the
> overhead involved, which lines up with the second cluster, but that would
> mean the first cluster has -264.3G of overhead, which just doesn't make
> sense.  Any ideas on where these stats might be going wrong?
>
> Thanks,
> Bryan
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


