Re: RDMA

On 04/18/2013 04:15 PM, Gandalf Corvotempesta wrote:
> 2013/4/18 Mark Nelson <mark.nelson@xxxxxxxxxxx>:
>> 10GbE is fully supported and widely used with Ceph while IB is a bit more
>> complicated with fewer users.  Having said that, IPoIB seems to work just
>> fine, and there is potential in the future for even better performance.
>> Which one is right for you probably depends on the existing network
>> infrastructure you are using, how fast your OSD nodes are, and what you are
>> trying to do.  Sadly there is no easy answer. :)

> QDR switches are sold (refurbished) at roughly €2k, 40Gb/s and
> usually 36 ports.
> 10GbE costs at least twice as much, and with only 12 or 24 ports.
>
> 2GB/s on QDR cards is good and still faster than 10GbE, but still
> half of what I would expect from a QDR card. Do you know why we
> lose more than 50% of the bandwidth?

Well, even with RDMA you probably aren't going to get much more than ~3.2GB/s (or at least that's what I saw on our production clusters at my last job). There's 8b/10b encoding overhead on the wire, so you can't get the full 40Gb/s.
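For anyone keeping score, the back-of-envelope ceilings work out like this (standard InfiniBand line rates; real throughput loses a bit more to protocol and software overhead):

    QDR: 40 Gb/s signalling x 8/10 (8b/10b) = 32 Gb/s data ~= 4.0 GB/s
    DDR: 20 Gb/s signalling x 8/10 (8b/10b) = 16 Gb/s data ~= 2.0 GB/s

So ~3.2GB/s over native RDMA is about 80% of the QDR data rate, and the ~2GB/s you're seeing over IPoIB is roughly half of it.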

Beyond that, it's just another software layer with the associated inefficiencies. Frankly I'm kind of amazed that rsockets can supposedly get around 3GB/s. That's impressive performance imho.
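Part of why rsockets can stay that close to native RDMA speed is that it only exposes the familiar BSD socket calls while running the data path over RDMA. Just as an illustration (not something from this thread), a minimal client sketch assuming librdmacm's rsockets API looks roughly like this; the hostname and port are made-up placeholders, and you'd link with -lrdmacm:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <rdma/rsocket.h>

    int main(void)
    {
        struct addrinfo hints = { .ai_socktype = SOCK_STREAM };
        struct addrinfo *res;

        /* "osd-host.example" and port 5000 are placeholders, not anything Ceph uses. */
        if (getaddrinfo("osd-host.example", "5000", &hints, &res) != 0)
            return 1;

        /* rsocket()/rconnect()/rsend() mirror socket()/connect()/send(),
         * but the data path runs over RDMA instead of the kernel TCP stack. */
        int fd = rsocket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || rconnect(fd, res->ai_addr, res->ai_addrlen) != 0)
            return 1;

        const char msg[] = "ping";
        rsend(fd, msg, sizeof(msg), 0);

        rclose(fd);
        freeaddrinfo(res);
        return 0;
    }

librdmacm also ships a preload library (librspreload), so unmodified socket applications can be pointed at rsockets without recompiling, which is presumably how people have been getting those numbers out of existing software.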


> Do you have experience on DDR cards?


Yes, but not in conjunction with modern IPoIB. I'm not sure how they would perform these days. I imagine probably better than 10GbE, but I don't know by how much.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




