Re: Ceph Stretch Cluster - df pool size (Max Avail)

Hi Kilian,

We do not currently use this mode of ceph clustering, but normally you also need to assign the CRUSH rule to the pool, otherwise it will take rule 0 as the default.
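For example (pool and rule names taken from Kilian's output and rule below), checking and assigning the rule would look like:

$ ceph osd pool get vm_stretch crush_rule
$ ceph osd pool set vm_stretch crush_rule stretch_rule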

The following calculation for rule 0 would also roughly match what you see:

(3 nodes * 6 SSDs * 1.8 TB) / 4 = 8.1 TB
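Spelled out (my expansion of the figures above, assuming only one site's three nodes are counted): 3 * 6 * 1.8 TB = 32.4 TB raw; 32.4 TB / 4 replicas = 8.1 TB, i.e. roughly 7.4 TiB, which is close to the ~7.5 TiB MAX AVAIL below.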

Hope it helps, Joachim


___________________________________
Clyso GmbH - Ceph Foundation Member

On 22.06.22 at 18:09, Kilian Ries wrote:
Hi,


I'm running a ceph stretch cluster with two datacenters. Each of the datacenters has 3x OSD nodes (6x in total) and 2x monitors. A third monitor is deployed as an arbiter node in a third datacenter.


Each OSD node has 6x SSDs with 1.8 TB storage - that gives me a total of about 63 TB of raw storage (6 nodes * 6 SSDs * 1.8 TB ≈ 63 TB).


In stretch mode my pool is configured with 4x replication - and as far as I understand this should give me a max pool storage size of ~15 TB (63 TB / 4 = 15.75 TB). But if I run "ceph df" it shows me only about half of that, ~7.5 TB.



$ ceph df

--- RAW STORAGE ---
CLASS    SIZE   AVAIL    USED  RAW USED  %RAW USED
ssd    63 TiB  63 TiB  35 GiB    35 GiB       0.05
TOTAL  63 TiB  63 TiB  35 GiB    35 GiB       0.05

--- POOLS ---
POOL                   ID  PGS   STORED  OBJECTS    USED  %USED  MAX AVAIL
device_health_metrics   1    4  4.4 MiB       36  17 MiB      0    7.5 TiB
vm_stretch              2   64  8.2 GiB    2.19k  33 GiB   0.11    7.5 TiB
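For reference, the replica count and the rule a pool actually uses can be checked with standard commands (pool name as above):

$ ceph osd pool get vm_stretch size
$ ceph osd pool get vm_stretch crush_rule
$ ceph osd pool ls detail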



My replication rule is from the ceph documentation:


https://docs.ceph.com/en/latest/rados/operations/stretch-mode/


rule stretch_rule {
         id 1
         min_size 1
         max_size 10
         type replicated
         step take site1
         step chooseleaf firstn 2 type host
         step emit
         step take site2
         step chooseleaf firstn 2 type host
         step emit
}
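In case it's unclear whether this rule is actually compiled into the cluster's CRUSH map, it can be inspected by name (rule name as above):

$ ceph osd crush rule ls
$ ceph osd crush rule dump stretch_rule

and, if it needs to be (re)injected, the usual decompile/edit/recompile round trip applies:

$ ceph osd getcrushmap -o crushmap.bin
$ crushtool -d crushmap.bin -o crushmap.txt
$ crushtool -c crushmap.txt -o crushmap.new.bin
$ ceph osd setcrushmap -i crushmap.new.bin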



Any idea why ceph shows me only about half the size I should be able to use (with 4x replication on the pool)?


Thanks,

Regards


Kilian
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


