Ceph storage distribution between pools

I have a small cluster with a single CRUSH map.  I use three pools: "one" (OpenNebula VMs on RBD), plus cephfs_data and cephfs_metadata for CephFS.  Here is my ceph df output:

RAW STORAGE:
    CLASS     SIZE       AVAIL      USED       RAW USED     %RAW USED 
    ssd       94 TiB     78 TiB     17 TiB       17 TiB         17.75 
    TOTAL     94 TiB     78 TiB     17 TiB       17 TiB         17.75 
 
POOLS:
    POOL                ID     STORED      OBJECTS     USED        %USED     MAX AVAIL 
    cephfs_data          1     3.3 TiB       6.62M      10 TiB     12.36        24 TiB 
    cephfs_metadata      2     2.1 GiB     447.63k     2.5 GiB         0        24 TiB 
    one                  5     2.2 TiB     598.12k     6.6 TiB      8.42        24 TiB 

What confuses me is the even distribution of MAX AVAIL between those pools.  When I mount CephFS on a client host, df -h shows me the pool utilization:
28T  3.4T   24T  13%
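
If I read it correctly, the used column matches the STORED value of cephfs_data and avail matches its MAX AVAIL.  A rough sanity check (the exact statfs accounting here is only my assumption, not something I have verified):

# Rough check of the client df -h numbers against ceph df.
# Assumption (mine): for CephFS, used ~= STORED of cephfs_data
# and avail ~= MAX AVAIL of cephfs_data.
stored_tib = 3.3       # cephfs_data STORED from ceph df above
max_avail_tib = 24     # cephfs_data MAX AVAIL from ceph df above

print(f"used  ~ {stored_tib} TiB  (df -h shows 3.4T)")
print(f"avail ~ {max_avail_tib} TiB   (df -h shows 24T)")
print(f"total ~ {stored_tib + max_avail_tib:.1f} TiB (df -h shows 28T, rounding aside)")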

I also have an old Hammer cluster where I see a similar picture in ceph df for a single CRUSH map (covering rbd, cephfs-data and cephfs-meta):

GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED 
    87053G     31306G       55747G         64.04 
POOLS:
    NAME            ID     USED       %USED     MAX AVAIL     OBJECTS 
    rbd             0      12907G     69.37         5700G     3312474 
    cephfs-data     12      2873G     33.52         5700G     5859947 
    cephfs-meta     13     90035k         0         5700G      443961 
    cloud12g        14      2857G     43.41         3726G      623737 

However, df -h on clients shows the total cluster utilization:

 86T   55T   31T  65%
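
(Converting the Hammer GLOBAL line from GiB to TiB, that does look like the whole cluster, give or take rounding:)

# Convert the GLOBAL numbers above from GiB to TiB to compare with df -h.
for name, gib in (("size", 87053), ("used", 55747), ("avail", 31306)):
    print(f"{name}: {gib / 1024:.0f} TiB")
# ~85 / ~54 / ~31 TiB -- roughly the 86T / 55T / 31T that df -h reports.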
 
It seems that Hammer dynamically reallocates available space between the pools on the same CRUSH map as needed.  Does Nautilus do the same?  In that case, does 24 TiB mean the actually available raw space divided by 3 (all my pools are set with 3/2 replication)?
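
For reference, my back-of-the-envelope math (the formula is only my guess at how MAX AVAIL is derived, which is exactly what I would like to confirm):

# Naive expectation: MAX AVAIL ~= raw AVAIL / replica count.
# I am guessing Nautilus also reserves headroom for the full ratio and for
# OSD imbalance, which might explain 24 TiB instead of 26 TiB -- unconfirmed.
raw_avail_tib = 78   # AVAIL from the RAW STORAGE section above
replicas = 3         # all pools use size=3 (3/2 replication)

print(f"naive per-pool MAX AVAIL: {raw_avail_tib / replicas:.0f} TiB  (ceph df reports 24 TiB)")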

Thank you and sorry for the confusion