On 10/24/2013 09:08 AM, Nathan Stratton wrote:
I have tried to make GlusterFS work for the last 2 years on different projects and have given up. With Gluster I have always used 10 gig InfiniBand. It's dirt cheap (about $80 a port used, including the switch) and very low latency, however Ceph does not support it so we are looking at Ethernet.
Ceph does work with IPoIB. We've got some people working on rsockets support, and Mellanox just open-sourced VMA, so there are some options on the InfiniBand side if you want to go that route. With QDR and IPoIB we have been able to push about 2.4GB/s per node. No idea how SDR would do though.
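If it helps, getting Ceph onto IPoIB is mostly just a matter of pointing the public and cluster networks at the IPoIB subnets. A minimal ceph.conf sketch, assuming hypothetical ib0/ib1 subnets of 192.168.10.0/24 and 192.168.20.0/24:

  [global]
  # client-facing traffic over the first IPoIB subnet (addresses are made up)
  public network = 192.168.10.0/24
  # OSD replication/recovery traffic over the second IPoIB subnet
  cluster network = 192.168.20.0/24

Most of the IPoIB throughput tends to come from running the interfaces in connected mode with a large MTU, but your mileage may vary.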
I know that 10GBase-T has more delay than SFP+ with direct attach cables (roughly 2.6 usec vs .3 usec per link), but does that matter? Some sites say it is a huge hit, but we are talking usec, not ms, so I find it hard to believe that it causes that much of an issue. I like the lower cost and use of standard cabling vs SFP+, but I don't want to sacrifice performance.
Honestly I wouldn't worry about it too much. We have bigger latency dragons to slay. :)
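To put rough numbers on it (every figure below is an assumption for illustration, not a measurement): even counting a few links each way, the extra PHY latency is a tiny fraction of a replicated SSD write that already takes on the order of a millisecond:

  # back-of-envelope comparison; every number here is assumed
  per_link_10gbaset = 2.6e-6    # seconds of PHY latency per 10GBase-T link
  per_link_sfp_dac = 0.3e-6     # seconds per SFP+ direct attach link
  links_round_trip = 4          # e.g. host -> switch -> host and back

  extra = (per_link_10gbaset - per_link_sfp_dac) * links_round_trip
  typical_write = 1e-3          # ~1 ms assumed for a replicated SSD write

  print("extra latency: %.1f usec" % (extra * 1e6))               # ~9.2 usec
  print("relative cost: %.1f%%" % (extra / typical_write * 100))  # ~0.9%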
Our plan is to use our KVM hosts for Ceph; the hardware we are looking at now is:

SFP+ option - Supermicro X9DRW-7TPF+ (Intel 82599)
10GBase-T option - Supermicro X9DRW-3TF+ (Intel X540)
2 - 2.9 GHz Xeon 2690 v2
16 - 8 gig 1877 MHz dual rank DDR3
9 - Samsung 840 EVO 120 GB SSD (1 root, 8 Ceph)
Just FYI, we haven't done a whole lot of optimization work on SSDs yet, so if you are shooting for really high IOPS, be prepared as it's still kind of the wild west. :) We've got a couple of people working on different projects that we hope will help here, but there's a lot of tuning work to be done still. :)
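If you want a baseline feel for where things stand before any tuning, something like rados bench against a scratch pool is usually the first thing to try (pool name, runtime, and thread count below are just examples):

  # 4K writes for 60 seconds, keeping the objects around for a read pass
  rados bench -p testpool 60 write -t 32 -b 4096 --no-cleanup
  # sequential read pass over the objects written above
  rados bench -p testpool 60 seq -t 32

That at least gives you a number to compare against the raw SSDs before blaming (or crediting) the network.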
Switch is going to be an Arista 7050-S for SFP+ or an Arista 7050-T for 10GBase-T.