Strange 'ceph df' output

Can't figure out why this is happening:
I have a HEALTH_OK cluster running ceph version 0.87; all nodes are Debian Wheezy with the stable kernel 3.2.65-1+deb7u1. ceph df shows me this:

$ ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    242T      221T        8519G          3.43
POOLS:
    NAME                  ID     USED      %USED     MAX AVAIL     OBJECTS
    rbd                   2      1948G      0.79        74902G      498856
    ec_backup-storage     4          0         0          146T           0
    cache                 5          0         0          184G           0
    block-devices         6       827G      0.33        74902G      211744

My expectation:

Total space = Used space + Available space

But 242T > 8.5T + 221T = 229.5T, so the two sides are not equal. Where have I lost approximately 12.5 TB of space?
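For clarity, here is the arithmetic spelled out (a minimal sketch in Python; it assumes the ceph df figures are binary units, TiB and GiB, rounded for display, which is why the gap comes out only approximately 12.5 TB):

# Sanity check of the GLOBAL line from 'ceph df' above.
# Assumption: SIZE/AVAIL are TiB, RAW USED is GiB, all rounded for display.
size_tib  = 242.0          # SIZE
avail_tib = 221.0          # AVAIL
used_tib  = 8519.0 / 1024  # RAW USED: 8519 GiB ~= 8.32 TiB

gap_tib = size_tib - (used_tib + avail_tib)
print("used + avail = %.2f TiB" % (used_tib + avail_tib))  # ~229.32 TiB
print("gap vs SIZE  = %.2f TiB" % gap_tib)                 # ~12.68 TiB unaccounted for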

$ ceph -s
    cluster 0745bec9-a7a7-4ee1-be5d-bb12db3cdd8f
     health HEALTH_OK
     monmap e1: 3 mons at {node04=10.0.0.14:6789/0,node05=10.0.0.15:6789/0,node06=10.0.0.16:6789/0}, election epoch 48, quorum 0,1,2 node04,node05,node06
     osdmap e16866: 102 osds: 102 up, 102 in
      pgmap v570489: 10200 pgs, 4 pools, 2775 GB data, 693 kobjects
            8518 GB used, 221 TB / 242 TB avail
               10200 active+clean

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
