Re: xio messenger prelim benchmark

On 2/6/2015 10:41:15 AM, "Vu Pham" <vuhuong@xxxxxxxxxxxx> wrote:

>
>>>>Yes, the xio messenger, which is implemented over Accelio, can run
>>>>over the rdma transport (InfiniBand, RoCE) and over TCP. Please note
>>>>that we have not enabled xio messenger / Accelio-tcp yet.
>>
>>Oh, ok, Great !
>>
>>
>>>>The xio messenger currently works with user-mode clients; we have
>>>>only validated/tested the user-mode rbd client.
>>>>SanDisk is working on a krbd-over-kAccelio implementation and, as of
>>>>last week, has basic I/Os working.
>>>>Hopefully krbd/kAccelio will be available soon.
>>
>>Great, so we can expect even better results :)
>>
>>
>>BTW, do you have any client-side cpu usage numbers from your benchmarks?
>>
>>
>Sorry, I did not collect cpu usage on the client.
>
>BTW, since you will use a Mellanox SX1012 EN switch for your benchmark,
>you can take advantage of 56GbE over 40GbE with a license upgrade.
>
>With my minimal setup of a single-node cluster (single OSD, 2
>pools/images) plus 1 client node running 16 fio_rbd client streams, we
>already see the benefit of the bigger pipe.
>
>             ib_send_bw    64k read (ceph)    256k read (ceph)
>--------------------------------------------------------------
>40GbE:       4350 MB/s     4300 MB/s          4350 MB/s
>56GbE:       5100 MB/s     4850 MB/s          5000 MB/s
>
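
(For anyone reproducing the ceph rows above: a 16-stream fio_rbd read
test along these lines should be roughly equivalent. This is only a
sketch, not the exact job used here; the pool, image, and client names
are placeholders and assume a pre-created rbd image.)

    # rough sketch of the 16-stream rbd read test; pool/image/client
    # names are placeholders, not the exact job file used above
    fio --name=rbd-read --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=testimg --rw=read \
        --bs=64k --iodepth=16 --numjobs=16 --group_reporting \
        --time_based --runtime=60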

On raw data (ib_send_bw), I get only 5100 MB/s on the 56GbE link because
I'm using the default mtu=1500. There is RoCE header overhead per frame,
and the achievable rate also depends on which RoCE MTU is selected/used
(the RoCE MTU, which has to fit inside the Ethernet frame, is 512 bytes
in this case).
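
(A quick way to confirm which RoCE MTU is in effect; the device name
below is just a placeholder for my setup.)

    # show the RoCE MTU currently in use on the port
    # ("mlx4_0" is a placeholder device name)
    ibv_devinfo -d mlx4_0 | grep -E "active_mtu|max_mtu"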

With jumbo frames (the RoCE MTU can then be 2K or 4K), the raw bandwidth
can get to ~50Gb/s.
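
(Roughly what I would do to test with jumbo frames; the interface and
device names are placeholders, and your perftest flags may differ.)

    # raise the Ethernet MTU so RoCE can negotiate a 2K/4K MTU
    # (4200 leaves headroom for RoCE headers above a 4K payload;
    #  "eth2" and "mlx4_0" are placeholder names)
    ip link set dev eth2 mtu 4200

    # re-run the raw bandwidth test: server first, then client
    ib_send_bw -d mlx4_0 -a -F --report_gbits
    ib_send_bw -d mlx4_0 -a -F --report_gbits <server_ip>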

-vu
