Ceph and InfiniBand

On Tue, 22 Jul 2014, Riccardo Murri wrote:
> Hello,
> 
> a few questions on Ceph's current support for InfiniBand:
> 
> (A) Can Ceph use InfiniBand's native protocol stack, or must it use
> IP-over-IB?  Google finds a couple of entries in the Ceph wiki related
> to native IB support (see [1], [2]), but neither of them seems finished
> and there is no timeline.
> 
> [1]: https://wiki.ceph.com/Planning/Blueprints/Emperor/msgr%3A_implement_infiniband_support_via_rsockets
> [2]: http://wiki.ceph.com/Planning/Blueprints/Giant/Accelio_RDMA_Messenger

This is work in progress.  We hope to get basic support into the tree 
in the next couple of months.
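
In the meantime, IPoIB works with the current releases: Ceph just sees 
the IPoIB interface as another IP network, so you point the usual 
network options at the IB subnet in ceph.conf.  A minimal sketch (the 
subnet below is only an example; use whatever your ib0 interfaces are 
addressed on):

  [global]
      # run all Ceph traffic over the IPoIB subnet (example value)
      public network  = 192.168.200.0/24
      cluster network = 192.168.200.0/24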

> (B) Can we connect to the same Ceph cluster from InfiniBand *and*
> Ethernet?  Some clients only have Ethernet and will not be
> upgraded; others have QDR InfiniBand -- we would like both
> sets to access the same storage cluster.

This is further out.  Very early refactoring to make this work is in the 
wip-addr branch.
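
A partial workaround with today's code is the public/cluster network 
split: put the client-facing public network on Ethernet (which every 
client can reach) and the replication/recovery cluster network on IPoIB.  
That doesn't give the IB clients a faster client path, so it may or may 
not cover your case.  A sketch (both subnets are made-up examples):

  [global]
      # client <-> mon/osd traffic on the Ethernet subnet (example)
      public network  = 10.0.0.0/24
      # osd <-> osd replication/recovery traffic on the IPoIB subnet (example)
      cluster network = 192.168.200.0/24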

> (C) I found this old thread about Ceph's performance on 10GbE and
> InfiniBand: are the issues reported there still current?
> 
> http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/6816

No idea!  :)

sage

> 
> 
> Thanks for any hints!
> 
> Riccardo
> 
> --
> Riccardo Murri
> http://www.s3it.uzh.ch/about/team/
> 
> S3IT: Services and Support for Science IT
> University of Zurich
> Winterthurerstrasse 190, CH-8057 Zürich (Switzerland)
> Tel: +41 44 635 4222
> Fax: +41 44 635 6888
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

