Re: troubleshooting ceph rdma performance


 



On Wed, Nov 7, 2018 at 10:52 PM Raju Rangoju <rajur@xxxxxxxxxxx> wrote:

Hello All,

 

I have been collecting performance numbers on our Ceph cluster, and I noticed very poor throughput with ceph async+rdma compared with TCP. I was wondering what tunings/settings I should apply to the cluster to improve Ceph RDMA (async+rdma) performance.
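For context, switching the messenger onto RDMA is done through ceph.conf options along these lines. The device name and buffer values below are illustrative placeholders, not our exact configuration:

```ini
[global]
# Use the async messenger with the RDMA backend
ms_type = async+rdma
# RDMA NIC to bind (placeholder; list devices with `ibv_devices`)
ms_async_rdma_device_name = mlx5_0
# Registered buffer pool settings; common tuning candidates
ms_async_rdma_buffer_size = 131072
ms_async_rdma_send_buffers = 1024
ms_async_rdma_receive_buffers = 32768
```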

 

Currently, from what we see, Ceph RDMA throughput is less than half of the Ceph TCP throughput (measured by running fio over iSCSI-mounted disks).
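For reference, the comparison runs were of this general shape. The job file below is a reconstruction for illustration only; the device path, block size, and queue depth are placeholders rather than our exact parameters:

```ini
; Illustrative fio job for comparing transports over an iSCSI-mounted disk
[global]
ioengine=libaio
direct=1
runtime=60
time_based

[seq-write]
filename=/dev/sdb
rw=write
bs=1M
iodepth=32
numjobs=4
```

The same job is run once with the cluster on async+rdma and once on async+posix (TCP), and the aggregate bandwidth is compared.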

Our ceph cluster has 8 nodes and configured with two networks, cluster and client networks.

 

Can someone please shed some light on this?


Unfortunately the RDMA implementations are still fairly experimental and the community doesn't have much experience with them. I think the last I heard, the people developing that feature were planning to port it over to a different RDMA library (though that might be wrong/out of date) — it's not something I would consider a stable implementation. :/
-Greg
 

 

I’d be glad to provide any further information regarding the setup.

 

Thanks in Advance,

Raju

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
