> >>>>Yes, xio messenger which is implemented over Accelio can run over >>>>rdma >>>>transport (Infiniband, RoCE) and TCP. Please note that we have not >>>>enabled xio messenger / Accelio-tcp yet. >> >>Oh, ok, Great ! >> >> >>>>xio messenger is currently working with user mode clients. We only >>>>validated/tested user mode rbd client. >>>>Sandisk are working on krbd over kAccelio implementation. As of last >>>>week, SanDisk have basic I/Os working. >>>>Hopefully krbd/kAccelio will be available soon. >> >>Great, so we can expect even better results :) >> >> >>BTW, do you have some client side cpu usage benchmark ? >> >> >Sorry, I did not collect the cpu usage @client > I just rerun the test with 2 OSDs, 4 fio_clients on 4 separated pools/images, each client uses numjobs=1 for random write test and numjobs=8 for random read test. I also measured the cpu usage@client this time xio messenger: -------------------- . 4K random write: ~26k iops (10 cores used at OSD, 2 cores used at client) . 4K random read: ~332k iops (34 cores used at OSD, 4 cores used at client) . 256K random write: ~1464 MB/s (6 cores used at OSD, 2 cores used at client) . 256K random read: ~4300 MB/s (6 cores used at OSD, 5 cores used at client) simple: ---------- . 4K random write: ~25k iops (10 cores used at OSD, 3 cores used at client) . 4K random read: ~230k iops (38 cores used at OSD, 30 cores used at client) . 256K random write: ~1475 MB/s (6 cores used at OSD, 2 cores used at client) . 256K random read: ~4300 MB/s (7 cores used at OSD, 7 cores used at client) -vu ��.n��������+%������w��{.n����z��u���ܨ}���Ơz�j:+v�����w����ޙ��&�)ߡ�a����z�ޗ���ݢj��w�f