Erasure-coded capacity calculation

Hello everyone!

I'm trying to calculate the theoretical usable storage of a Ceph cluster with erasure-coded pools.

I have 8 nodes, and the profile for all data pools will be k=6, m=2.
If every node has 6 x 1TB disks, wouldn't the calculation be as follows:
RAW capacity: 8 nodes x 6 disks x 1TB = 48TB
Loss to m=2: 48TB / 8 nodes x 2 (m) = 12TB
EC capacity: 48TB - 12TB = 36TB
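
For reference, here is how I sanity-check this in a few lines of Python (just a sketch, assuming the usable fraction of an EC pool is simply k/(k+m), which is what the numbers above imply; the variable names are mine):

# Sanity check for the 8-node example above (not a Ceph tool).
k, m = 6, 2
nodes, disks_per_node, disk_tb = 8, 6, 1.0

raw_tb = nodes * disks_per_node * disk_tb   # 8 x 6 x 1 = 48 TB raw
usable_tb = raw_tb * k / (k + m)            # 48 x 6/8 = 36 TB
overhead_tb = raw_tb - usable_tb            # 12 TB lost to m=2

print(f"raw={raw_tb} TB, overhead={overhead_tb} TB, usable={usable_tb} TB")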

At the moment I have one cluster with 8 nodes and different disks than in the example above (but every node has the same number of disks, and all disks are the same size).
The output of ceph df detail is:
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    109 TiB  103 TiB  5.8 TiB   5.9 TiB       5.41
TOTAL  109 TiB  103 TiB  5.8 TiB   5.9 TiB       5.41

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  %USED  MAX AVAIL
device_health_metrics   1    1   51 MiB       48      0     30 TiB
rep_data_fs             2   32   14 KiB    3.41k      0     30 TiB
rep_meta_fs             3   32  227 MiB    1.72k      0     30 TiB
ec_bkp1                 4   32  4.2 TiB    1.10M   6.11     67 TiB

So ec_bkp1 uses 4.2TiB and there are 67TiB of free usable storage.
This means the total EC usable storage would be 71.2TiB.
But calculating from the 109TiB of RAW storage, shouldn't it be 81.75TiB (109TiB x 6/8)?
Are the missing ~10.5TiB just overhead (that would be a lot of overhead), or is my calculation incorrect?
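
In numbers (a quick Python check of the same assumption, usable = raw x k/(k+m), with the TiB values taken from the ceph df output above):

# Compare the reported capacity against the naive k/(k+m) expectation.
k, m = 6, 2
raw_tib = 109.0        # RAW STORAGE SIZE from ceph df
stored_tib = 4.2       # STORED for ec_bkp1
max_avail_tib = 67.0   # MAX AVAIL for ec_bkp1

reported_usable = stored_tib + max_avail_tib   # 71.2 TiB
expected_usable = raw_tib * k / (k + m)        # 81.75 TiB

print(f"reported={reported_usable:.2f} TiB, "
      f"expected={expected_usable:.2f} TiB, "
      f"gap={expected_usable - reported_usable:.2f} TiB")  # ~10.55 TiB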

And what if I want to expand the cluster from the first example above by three nodes with 6 x 2TB each, i.e. disks that are not the same size as the existing ones?
Will the calculation with the same EC profile still work the same way?
RAW capacity: 8 nodes x 6 disks x 1TB + 3 nodes x 6 disks x 2TB = 84TB
Loss to m=2: 84TB / 11 nodes x 2 (m) = 15.27TB
EC capacity: 84TB - 15.27TB = 68.73TB
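
Here are both ways of computing it side by side (again just a sketch; dividing by the node count only coincides with m/(k+m) when the node count equals k+m, as it did in the 8-node example, so I am not sure the per-node division is still valid here):

# Expanded cluster: 8 nodes with 6x1TB plus 3 nodes with 6x2TB.
k, m = 6, 2
raw_tb = 8 * 6 * 1.0 + 3 * 6 * 2.0   # 48 + 36 = 84 TB
nodes = 11

per_node_overhead = raw_tb / nodes * m   # 84/11 x 2 = 15.27 TB (my guess above)
ec_overhead = raw_tb * m / (k + m)       # 84 x 2/8 = 21 TB (fixed 25% for k=6, m=2)

print(f"usable (per-node method): {raw_tb - per_node_overhead:.2f} TB")  # 68.73
print(f"usable (k/(k+m) method):  {raw_tb - ec_overhead:.2f} TB")        # 63.00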


Thanks in advance,
Simon