Sorry for bombarding you with questions; I am just curious as to where the 40% performance figure comes from.
On 06/19/2015 11:05 AM, Lincoln Bryant wrote:
Hi Sean,
We have ~1PB of EC storage using Dell R730xd servers with 6TB OSDs. We've got our erasure coding profile set up to be k=10,m=3 which gives us a very reasonable chunk of the raw storage with nice resiliency.
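For anyone following along, a k=10,m=3 profile like the one described above can be created along these lines. The profile name, pool name, PG counts, plugin, and failure domain here are illustrative placeholders, not taken from this thread (and the failure-domain parameter name varies by Ceph release):

```shell
# Sketch: create a k=10,m=3 erasure-code profile and a pool that uses it.
# Names and numbers below are examples only.
ceph osd erasure-code-profile set ec-k10-m3 \
    k=10 m=3 \
    plugin=jerasure \
    crush-failure-domain=host   # older releases call this ruleset-failure-domain

# Create an erasure-coded pool backed by that profile
ceph osd pool create ecpool 1024 1024 erasure ec-k10-m3
```

With k=10,m=3, each object is split into 10 data chunks plus 3 coding chunks, so the pool survives the loss of any 3 OSDs holding chunks of an object while using only 1.3x raw space.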
I found that CPU usage was significantly higher with EC, but not so much as to be problematic. Additionally, EC performance was about 40% of replicated-pool performance in our testing.
With 36-disk servers you'll probably need to make sure you do the usual kernel tweaks like increasing the max number of file descriptors, etc.
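For reference, the file-descriptor bump usually looks something like the following. The exact values are illustrative, not a recommendation from this thread:

```shell
# System-wide file descriptor ceiling (value is illustrative)
sysctl -w fs.file-max=1048576
echo "fs.file-max = 1048576" >> /etc/sysctl.d/90-ceph.conf

# Per-process limits for the daemons' user (path/values are examples)
cat >> /etc/security/limits.d/90-ceph.conf <<'EOF'
root  soft  nofile  1048576
root  hard  nofile  1048576
EOF
```

Ceph also has its own `max open files` setting in the `[global]` section of ceph.conf, which caps what the OSD daemons request at startup.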
Cheers,
Lincoln
On Jun 19, 2015, at 10:36 AM, Sean wrote:
I am looking to run Ceph with EC on a few leftover storage servers (36-disk Supermicro servers with dual Xeon sockets and around 256 GB of RAM). I did a small test using one node with the ISA library and noticed that the CPU load was pretty spiky for just normal operation.
Does anyone have any experience running Ceph EC on around 216 to 270 4 TB disks? I'm looking to yield around 680 TB to 1 PB if possible. Just putting my feelers out there to see if anyone else has had any experience, and looking for any guidance.
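For what it's worth, the yield numbers above work out roughly as follows under a k=10,m=3 profile (usable = raw * k / (k + m); this ignores filesystem overhead and the headroom you'd keep free for recovery):

```shell
# Rough usable-capacity arithmetic for a k=10,m=3 EC pool (integer TB)
k=10; m=3
for disks in 216 270; do
    raw=$((disks * 4))                 # 4 TB drives
    usable=$((raw * k / (k + m)))      # EC overhead factor k/(k+m)
    echo "$disks disks: ${raw} TB raw -> ~${usable} TB usable"
done
```

That puts 216 disks at roughly 664 TB usable and 270 disks at roughly 830 TB, i.e. somewhat under the 680 TB to 1 PB target before any other overhead is counted.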
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com