Re: ceph df : negative numbers

I've not been able to reproduce the issue with exactly the same version
your cluster is running.

./ceph -v
ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)

./rados df | grep cephfs
cephfs_data_a     409600     100     0     0     0     0     0     100     409600

./rados -p cephfs_data_a ls | wc -l
100

If you could reproduce the issue and share the procedure with us, that
would definitely help.

Will try again.
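
For reference, a sketch of the kind of write-then-compare procedure that
would be useful to share (the pg count of 64 is arbitrary, and the 4 MiB
object size is only inferred from the 409600 KB / 100 objects in my
output above):

./ceph osd pool create cephfs_data_a 64
dd if=/dev/zero of=obj.dat bs=4M count=1
for i in $(seq 1 100); do ./rados -p cephfs_data_a put obj-$i obj.dat; done
./rados df | grep cephfs
./rados -p cephfs_data_a ls | wc -l
./ceph df

With something like that from your side, we can compare the `rados df`,
`rados ls` and `ceph df` numbers step by step.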

On Tue, Feb 7, 2017 at 2:01 AM, Florent B <florent@xxxxxxxxxxx> wrote:
> On 02/06/2017 05:49 PM, Shinobu Kinjo wrote:
>> How about *pve01-rbd01*?
>>
>>  * rados -p pve01-rbd01 ls | wc -l
>>
>> ?
>
> # rados -p pve01-rbd01 ls | wc -l
> 871
>
> # ceph df
> GLOBAL:
>     SIZE      AVAIL     RAW USED     %RAW USED
>     5173G     5146G       27251M          0.51
> POOLS:
>     NAME            ID     USED       %USED     MAX AVAIL     OBJECTS
>     data            0           0         0         2985G           0
>     metadata        1      59178k         0         2985G         114
>     pve01-rbd01     5       2572M      0.08         2985G         852
>     cephfs01        6      39059k         0         2985G         120

Can you run just `df -h` on the box where CephFS is mounted, or
`ceph -s` on one of your MON hosts?
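
For example (the mountpoint below is just a placeholder for wherever
CephFS is mounted on your client):

df -h /mnt/cephfs    # on the CephFS client
ceph -s              # on one of the MON hosts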

>
>
> And I see there's a huge difference between the pools' "USED" and the
> global "RAW USED": 27251M is not the sum of all pools (including copies).
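
A rough sanity check on that, assuming replicated pools with size 3 (a
guess at the default; your actual pool sizes may differ):

59178k + 2572M + 39059k  ~=  2668M    (sum of per-pool USED)
2668M x 3 copies         ~=  8004M    (expected RAW USED if size=3)
27251M - 8004M           ~= 19247M    (unaccounted)

If the gap really is that large, the remainder has to live outside the
pools (e.g. OSD journals or filesystem overhead), so `ceph osd df`
output from your cluster would help narrow it down.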


