Ceph uses more raw space than expected

Hi all,

I'm running a 6-node, 24-OSD cluster on Jewel 10.2.5 with kernel 4.8.

I've put about 1 TB of data into the cluster, and all pools have size 3, so I'd expect roughly 3 TB of raw usage. Instead, about 5 TB of raw disk space is used.

Result of ceph -s:

      pgmap v1057361: 2400 pgs, 3 pools, 984 GB data, 125 Mobjects
            5039 GB used, 12353 GB / 17393 GB avail
                2398 active+clean
                   2 active+clean+scrubbing
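
Doing the arithmetic on those numbers (a quick sanity check in Python, using
the figures from ceph -s above):

    # Expected raw usage for 984 GB of data at 3x replication,
    # compared with what the cluster actually reports.
    data_gb = 984        # "984 GB data" from ceph -s
    raw_used_gb = 5039   # "5039 GB used" from ceph -s
    replicas = 3
    expected_gb = data_gb * replicas            # 2952 GB
    unexplained_gb = raw_used_gb - expected_gb  # 2087 GB
    print(f"expected {expected_gb} GB raw, gap of {unexplained_gb} GB")

So roughly 2 TB of raw usage is unaccounted for.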

Result of ceph df:

GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED 
    17393G     12353G        5039G         28.97 
POOLS:
    NAME                ID     USED       %USED     MAX AVAIL     OBJECTS   
    rbd                 1        801G     17.12         3880G        206299 
    cephfs_data         2        182G      4.49         3880G     130875410 
    cephfs_metadata     3      32540k         0         3880G        201555
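
One thing that stands out to me in the ceph df output: cephfs_data holds 182 GB
spread over about 131 million objects, i.e. an average object size of roughly
1.5 KB. Since FileStore (which these XFS-backed OSDs use) keeps each object as
a separate file, and XFS allocates at least one block per file, every tiny
object may occupy far more disk than its payload. A back-of-the-envelope
sketch, assuming the default 4 KiB XFS block size (I haven't verified the
block size on my OSDs):

    # Rough lower bound on FileStore disk usage for many tiny objects,
    # assuming each object file occupies at least one 4 KiB XFS block.
    objects = 130_875_410          # cephfs_data object count from ceph df
    data_bytes = 182 * 2**30       # 182 GB of actual data
    block = 4096                   # assumed XFS block size
    avg_obj = data_bytes / objects           # ~1.5 KiB per object
    per_replica = objects * block            # ~499 GiB on disk per replica
    raw = per_replica * 3                    # ~1498 GiB across 3 replicas
    print(f"avg object size: {avg_obj:.0f} bytes")
    print(f"per-replica floor: {per_replica / 2**30:.0f} GiB")
    print(f"raw floor at 3x:   {raw / 2**30:.0f} GiB")

If that reasoning is right, block-level allocation alone could account for
close to 1 TB of the gap (about 1498 GiB on disk versus 546 GiB of replicated
data), before counting inodes, directory entries, and FileStore journals. But
I'd rather have confirmation than keep guessing.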

Result of ceph osd dump:

pool 1 'rbd' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 800 pgp_num 800 last_change 482 flags hashpspool stripe_width 0
pool 2 'cephfs_data' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 800 pgp_num 800 last_change 410 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 3 'cephfs_metadata' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 800 pgp_num 800 last_change 408 flags hashpspool stripe_width 0

The cluster was set up with ceph-deploy, so each OSD drive is formatted with XFS. One more thing I should mention: I'm using the experimental directory fragmentation feature in CephFS.
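
To check whether the space is really consumed at the XFS level rather than
being a reporting artifact, the per-OSD filesystem usage can be compared with
ceph's accounting (5039 GB across 24 OSDs is about 210 GB each). A minimal
sketch, assuming the default ceph-deploy mount path:

    import os

    # Filesystem-level usage of one OSD's data directory.
    # /var/lib/ceph/osd/ceph-0 is the ceph-deploy default; adjust the
    # path and OSD id for your layout.
    st = os.statvfs("/var/lib/ceph/osd/ceph-0")
    total = st.f_blocks * st.f_frsize
    used = (st.f_blocks - st.f_bfree) * st.f_frsize
    print(f"osd.0: {used / 2**30:.1f} GiB used of {total / 2**30:.1f} GiB")
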
After scouring the mailing list archives, I found this post, which seems to describe a related (or the same) issue: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/004901.html

Does anyone know whether this is a bug, or legitimate overhead that I failed to account for?

Thanks,
Pavel
