Re: RDMA performance comparison: IBNBD, SCST, NVMEoF

Hello Bart,

On Tue, Apr 18, 2017 at 8:22 PM, Bart Van Assche
<Bart.VanAssche@xxxxxxxxxxx> wrote:
> On Tue, 2017-04-18 at 19:33 +0200, Roman Penyaev wrote:
>> With this email I would like to share some fresh RDMA performance
>> results for IBNBD, SCST and NVMEoF, based on the 4.10 kernel and a
>> variety of configurations.
>
> Hello Roman,
>
> Thank you for having shared these results. But please do not expect me
> to have another look at IBNBD before the design bugs in the driver and
> also in the protocol get fixed.

I only expected that you might find the results interesting. In these runs
I targeted the following:

    1) retest on latest kernel
    2) compare against NVMEoF
    3) retest using register_always=N

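Regarding 3) above: register_always is the initiator-side module parameter
that controls whether memory registration is used even for contiguous
buffers; both ib_srp and nvme_rdma expose a boolean parameter with that
name. A minimal sketch of how it can be disabled for a run (the file name
is only an example, and it assumes the initiators are loaded as the ib_srp
and nvme_rdma modules):

    # /etc/modprobe.d/rdma-initiators.conf  -- example file name
    # Skip memory registration for contiguous regions on the SRP initiator
    options ib_srp register_always=N
    # Same knob on the NVMEoF RDMA initiator
    options nvme_rdma register_always=N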

> The presentation during Vault 2017 made
> it clear that the driver does not scale if more than two CPUs submit I/O
> simultaneously at the initiator side.

On the IOPS graph, where I increase the number of simultaneous fio jobs up
to 128 (the initiator has 64 CPUs), NVMEoF tends to follow the same curve,
always staying below IBNBD. So even if this is a scalability problem, it
shows up in the NVMEoF runs as well. That's why I posted these results: to
draw someone's attention to it.


--
Roman


