Re: Optimal Erasure Code profile?

Hi,

since you can't change a pool's EC profile after the pool has been created, you have to choose a reasonable number of chunks up front. If you need to start with those 6 hosts, I would also recommend spanning the EC profile across all of them, but keep in mind that with one chunk per host the cluster won't be able to recover to full redundancy if a host fails. That only becomes possible once you expand the cluster.

Which profile to choose for the 6 chunks depends on your resiliency requirements: how many host failures do you need/want to sustain? k=4, m=2 is a reasonable profile: your clients would not notice a single host failure, and while I/O would pause if a second host failed (because the default min_size is k + 1), data loss would still be prevented.
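For reference, a minimal sketch of how such a k=4, m=2 profile and pool might be created, restricted to HDD OSDs with one chunk per host. The profile and pool names "ec42-hdd" and "ecpool-hdd" are just placeholders, and pg_num is left to the autoscaler:

# Sketch only: a 4+2 jerasure profile with host failure domain on HDD OSDs.
ceph osd erasure-code-profile set ec42-hdd \
    k=4 m=2 \
    plugin=jerasure technique=reed_sol_van \
    crush-device-class=hdd crush-failure-domain=host

# Create an erasure-coded pool that uses the profile.
ceph osd pool create ecpool-hdd erasure ec42-hdd

# The default min_size for this pool is k + 1 = 5; verify with:
ceph osd pool get ecpool-hdd min_size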

Regards,
Eugen


Quoting Zakhar Kirpichenko <zakhar@xxxxxxxxx>:

Hi!

I've got a Ceph 16.2.6 cluster. The hardware is 6 x Supermicro SSG-6029P
nodes, each equipped with:

2 x Intel(R) Xeon(R) Gold 5220R CPUs
384 GB RAM
2 x boot drives
2 x 1.6 TB enterprise NVMe drives (DB/WAL)
2 x 6.4 TB enterprise drives (storage tier)
9 x 9 TB HDDs (storage tier)
2 x Intel XL710 NICs connected to a pair of 40/100GE switches

Please help me understand the calculation / choice of the optimal EC
profile for this setup. I would like the EC pool to span all 6 nodes, on HDDs
only, with the best combination of resiliency and efficiency, keeping in mind
that the cluster will expand. Previously, when I had only 3 nodes, I tested
EC with:

crush-device-class=hdd
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=2
m=1
plugin=jerasure
technique=reed_sol_van
w=8
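That listing matches the output format of "ceph osd erasure-code-profile get". As a rough sketch, a test profile like it could have been created and inspected like this ("ec21-test" is a made-up name):

ceph osd erasure-code-profile set ec21-test \
    k=2 m=1 crush-device-class=hdd crush-failure-domain=host
ceph osd erasure-code-profile ls
ceph osd erasure-code-profile get ec21-test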

I am leaning towards using the above profile with k=4, m=2 for "production"
use, but I am not sure that I understand the math correctly, that this
profile is optimal for my current setup, or that I'll be able to scale it
properly by adding new nodes. I would very much appreciate any advice!
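As a rough back-of-the-envelope check (plain shell arithmetic, not output from any cluster), the usual numbers for k=4, m=2 with failure-domain=host across 6 hosts work out like this:

# Illustrative arithmetic only; k, m and hosts match the proposed profile.
k=4; m=2; hosts=6
echo "usable capacity : $(( 100 * k / (k + m) ))% of raw HDD space"      # 66%
echo "chunks per host : 1 (needs hosts >= k + m, here 6 >= 6)"
echo "host failures survivable without data loss   : $m"                 # 2
echo "host failures survivable with I/O still served: $(( m - 1 ))"      # 1, since min_size = k + 1 = 5

Once a 7th host is added, the same profile keeps working, and a failed host's chunks can then be rebuilt on the spare host, which is the recovery headroom Eugen describes above.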

Best regards,
Zakhar
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


