Please do keep in mind that this is still *very* experimental and likely
to destroy all data and life within a 2-mile radius. ;)
Mark
On 04/08/2015 01:16 PM, Andrei Mikhailovsky wrote:
Somnath,
Sounds very promising! I can't wait to try it on my cluster, as I am
currently using IPoIB instead of the native RDMA transport.
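If I understand the experimental branch correctly, switching from IPoIB
to the native XIO messenger would look roughly like this in ceph.conf
(I haven't verified these option names against the current tree, so
treat them as a guess rather than a recipe):

    [global]
    # Today: cluster/public networks ride on the IPoIB interface (ib0),
    # so Ceph still speaks TCP over the IB fabric.
    # To try native RDMA, the experimental XIO messenger has to be
    # enabled explicitly (names unverified, from the XIO work):
    enable experimental unrecoverable data corrupting features = ms-type-xio
    ms type = xio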
Cheers
Andrei
------------------------------------------------------------------------
*From: *"Somnath Roy" <Somnath.Roy@xxxxxxxxxxx>
*To: *"Andrei Mikhailovsky" <andrei@xxxxxxxxxx>, "Andrey Korolyov"
<andrey@xxxxxxx>
*Cc: *ceph-users@xxxxxxxxxxxxxx, "ceph-devel"
<ceph-devel@xxxxxxxxxxxxxxx>
*Sent: *Wednesday, 8 April, 2015 5:23:23 PM
*Subject: *RE: Preliminary RDMA vs TCP numbers
Andrei,
Yes, I see it has a lot of potential, and I believe that once the
performance bottlenecks inside the XIO messenger are fixed it should go
further. We are working on it and will keep the community posted.
Thanks & Regards
Somnath
*From:* Andrei Mikhailovsky [mailto:andrei@xxxxxxxxxx]
*Sent:* Wednesday, April 08, 2015 2:22 AM
*To:* Andrey Korolyov
*Cc:* ceph-users@xxxxxxxxxxxxxx; ceph-devel; Somnath Roy
*Subject:* Re: Preliminary RDMA vs TCP numbers
Hi,
Am I the only person noticing disappointing results from the
preliminary RDMA testing, or am I reading the numbers wrong?
Yes, it's true that on a very small cluster you do see a great
improvement with RDMA, but in real life RDMA is used in large
infrastructure projects, not on a few servers with a handful of
OSDs. In fact, from what I've seen in the slides, the RDMA
implementation scales horribly, to the point that it becomes slower
the more OSDs you throw at it.
From my limited knowledge, I expected much higher performance gains
with RDMA, given that you should have much lower latency, overhead,
and CPU utilisation when using this transport compared with TCP.
Are we likely to see a great deal of improvement with Ceph and RDMA
in the near future? Is there a roadmap for stable and reliable RDMA
transport support?
Thanks
Andrei
------------------------------------------------------------------------
*From: *"Andrey Korolyov" <andrey@xxxxxxx <mailto:andrey@xxxxxxx>>
*To: *"Somnath Roy" <Somnath.Roy@xxxxxxxxxxx
<mailto:Somnath.Roy@xxxxxxxxxxx>>
*Cc: *ceph-users@xxxxxxxxxxxxxx
<mailto:ceph-users@xxxxxxxxxxxxxx>, "ceph-devel"
<ceph-devel@xxxxxxxxxxxxxxx <mailto:ceph-devel@xxxxxxxxxxxxxxx>>
*Sent: *Wednesday, 8 April, 2015 9:28:12 AM
*Subject: *Re: Preliminary RDMA vs TCP numbers
On Wed, Apr 8, 2015 at 11:17 AM, Somnath Roy
<Somnath.Roy@xxxxxxxxxxx> wrote:
>
> Hi,
> Please find the preliminary performance numbers of the TCP vs RDMA (XIO) implementations (on top of SSDs) at the following link.
>
> http://www.slideshare.net/somnathroy7568/ceph-on-rdma
>
> It seems the attachment didn't go through, so I had to use SlideShare.
>
> Mark,
> If we have time, I can present it in tomorrow's performance meeting.
>
> Thanks & Regards
> Somnath
>
Those numbers are really impressive (for small numbers at least)! What
TCP settings are you using? For example, the difference can narrow at
scale due to less aggressive per-connection ramp-up under CUBIC across
a larger number of nodes, though I do not believe that was the main
reason for the observed TCP catch-up on a relatively flat workload such
as the one fio generates.
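By "TCP settings" I mean things like the congestion control algorithm
and the socket buffer limits; roughly, the first things I would check
on Linux are:

    # Congestion control in use (CUBIC is the Linux default) and alternatives:
    sysctl net.ipv4.tcp_congestion_control
    sysctl net.ipv4.tcp_available_congestion_control
    # Socket buffer ceilings, which often matter more than the CC algorithm
    # for a flat fio-style workload:
    sysctl net.core.rmem_max net.core.wmem_max
    sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem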
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com