Re: RDMA performance comparison: IBNBD, SCST, NVMEoF

Hello Bart,

On Tue, Apr 18, 2017 at 8:22 PM, Bart Van Assche
<Bart.VanAssche@xxxxxxxxxxx> wrote:
> On Tue, 2017-04-18 at 19:33 +0200, Roman Penyaev wrote:
>> With this email I would like to share some fresh RDMA performance
>> results for IBNBD, SCST and NVMEoF, based on the 4.10 kernel and a
>> variety of configurations.
>
> Hello Roman,
>
> Thank you for having shared these results. But please do not expect me
> to have another look at IBNBD before the design bugs in the driver and
> also in the protocol get fixed.

I only expected that you might find the results interesting. With these
runs I targeted the following (a configuration sketch for item 3 follows
the list):

    1) retest on the latest kernel
    2) compare against NVMEoF
    3) retest using register_always=N
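For item 3, a minimal sketch of how register_always=N can be set on the
initiator, assuming the SRP initiator module ib_srp (the modprobe.d path
is just an example; adjust for your distribution):

    # Reload the SRP initiator with register_always disabled, so memory
    # registration is skipped for contiguous memory regions:
    modprobe -r ib_srp
    modprobe ib_srp register_always=N

    # Or make the setting persistent across reloads:
    echo "options ib_srp register_always=N" > /etc/modprobe.d/ib_srp.conf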


> The presentation during Vault 2017 made
> it clear that the driver does not scale if more than two CPUs submit I/O
> simultaneously at the initiator side.

On the IOPS graph, where I increase the number of simultaneous fio jobs
up to 128 (the initiator has 64 CPUs), NVMEoF tends to follow the same
curve, always staying below IBNBD. So even if this is a scalability
problem, it shows up in the NVMEoF runs as well. That's why I posted
these results: to draw someone's attention to it.
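For reference, the job sweep had roughly the following shape. This is an
illustrative sketch only, not the exact command line from the runs; the
device path, block size, I/O pattern, queue depth and runtime are
placeholders:

    # Hypothetical fio sweep over the number of simultaneous jobs;
    # /dev/ibnbd0, bs, rw, iodepth and runtime are placeholder values.
    for jobs in 1 2 4 8 16 32 64 128; do
        fio --name=scaling --filename=/dev/ibnbd0 --direct=1 \
            --rw=randwrite --bs=4k --ioengine=libaio --iodepth=64 \
            --numjobs=$jobs --group_reporting --time_based --runtime=30
    done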


--
Roman