>>> Yes, xio messenger, which is implemented over Accelio, can run over
>>> RDMA transports (InfiniBand, RoCE) and TCP. Please note that we have
>>> not enabled xio messenger / Accelio-TCP yet.
>
> Oh, OK, great!
>
>>> xio messenger currently works with user-mode clients. We have only
>>> validated/tested the user-mode rbd client.
>>> SanDisk is working on a krbd over kAccelio implementation. As of
>>> last week, SanDisk had basic I/Os working.
>>> Hopefully krbd/kAccelio will be available soon.
>
> Great, so we can expect even better results :)
>
> BTW, do you have any client-side CPU usage benchmarks?

Sorry, I did not collect CPU usage on the client.

BTW, since you will use a Mellanox SX1012 EN switch for your benchmark, you can take advantage of 56GbE over 40GbE with a license upgrade. Even with my minimal setup of a single-node cluster (single OSD, 2 pools/images) plus one client node running 16 fio_rbd client streams, we already see the benefit of the bigger pipe:

           ib_send_bw    64k read (ceph)    256k read (ceph)
   -----------------------------------------------------------
   40GbE:  4350 MB/s     4300 MB/s          4350 MB/s
   56GbE:  5100 MB/s     4850 MB/s          5000 MB/s
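For anyone wanting to reproduce a run like the 16-stream fio_rbd test above, a job file along these lines should work (a sketch only: the pool name, image name, cephx user, and iodepth are assumptions, not the parameters used in the numbers above):

```ini
; hypothetical fio job using the rbd ioengine (requires fio built with rbd support)
[global]
ioengine=rbd
clientname=admin        ; cephx user -- assumption
pool=rbdbench           ; hypothetical pool name
rbdname=bench-image     ; hypothetical image name
rw=read
bs=256k                 ; matches the 256k read column above
direct=1
time_based=1
runtime=60

[rbd-read-streams]
iodepth=32              ; assumed queue depth
numjobs=16              ; the 16 client streams mentioned above
group_reporting=1
```

With group_reporting enabled, fio prints one aggregate bandwidth figure for all 16 streams, which is the number to compare against the table above.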