Re: RDMA performance comparison: IBNBD, SCST, NVMEoF

On Tue, 2017-04-18 at 19:33 +0200, Roman Penyaev wrote:
> By current email I would like to share some fresh RDMA performance
> results of IBNBD, SCST and NVMEof, based on 4.10 kernel and variety
> of configurations.

Hello Roman,

Thank you for sharing these results. However, please do not expect me
to take another look at IBNBD before the design bugs in both the driver
and the protocol have been fixed. The Vault 2017 presentation made it
clear that the driver does not scale when more than two CPUs submit I/O
simultaneously on the initiator side. The comments Sagi posted should be
addressed, but I have not seen any progress from the IBNBD authors with
regard to these comments ...

See also:
* Danil Kipnis, Infiniband Network Block Device (IBNBD), Vault 2017
(https://vault2017.sched.com/event/9Xjw/infiniband-network-block-device-ibnbd-danil-kipnis-profitbricks-gmbh).
* Sagi Grimberg, Re: [RFC PATCH 00/28] INFINIBAND NETWORK BLOCK
DEVICE (IBNBD), March 27th, 2017
(https://www.spinics.net/lists/linux-rdma/msg47879.html).

Best regards,

Bart.