On Fri, 2017-03-24 at 13:46 +0100, Jinpu Wang wrote:
> Our IBNBD project was started 3 years ago based on our need for Cloud
> Computing, NVMeOF is a bit younger.
> - IBNBD is one of our components, part of our software defined storage solution.
> - As I listed in features, IBNBD has it's own features
>
> We're planning to look more into NVMeOF, but it's not a replacement for IBNBD.

Hello Jack, Danil and Roman,

Thanks for having taken the time to open source this work and to travel to
Boston to present it at the Vault conference. However, my understanding of
IBNBD is that this driver has several shortcomings that neither NVMeOF nor
iSER nor SRP has:
* It does not scale with the number of CPUs submitting I/O. The graphs shown
  during the Vault talk clearly illustrate this. This is probably the result
  of sharing a data structure across all client CPUs, maybe the bitmap that
  tracks which parts of the target buffer space are in use (a minimal sketch
  of that contention pattern is appended below).
* It supports IB but none of the other RDMA transports (RoCE / iWARP).

We also need performance numbers that compare IBNBD against SRP and/or NVMeOF
with memory registration disabled to see whether and how much faster IBNBD is
than these two protocols. The fact that IBNBD only needs two messages per I/O
is an advantage it has today over SRP, but not over NVMeOF nor over iSER: the
upstream initiator drivers for the latter two protocols already support
inline data.

Another question I have is whether integration with multipathd is supported.
If multipathd tries to run scsi_id against an IBNBD client device, that will
fail.

Thanks,

Bart.
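
To make the scalability point above concrete, here is a minimal user-space
sketch. It is not IBNBD code: the shared bitmap, the slot count (NSLOTS), the
thread count (NTHREADS) and the allocator functions are assumptions chosen
only to illustrate the pattern of every submitting CPU doing atomic
compare-and-swap on one shared structure versus allocating from a per-CPU
partition. The interesting comparison is how the two printed times diverge as
NTHREADS grows.

/*
 * Minimal user-space sketch of the contention pattern described above.
 * This is NOT IBNBD code: the shared bitmap, slot count and thread count
 * are assumptions used only to illustrate why a single allocator structure
 * shared by all submitting CPUs limits scalability.
 *
 * Build: gcc -O2 -pthread bitmap_contention.c -o bitmap_contention
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define NSLOTS   4096        /* slots in the target buffer space (assumed) */
#define NTHREADS 8           /* simulated submitting CPUs (assumed)        */
#define NOPS     1000000     /* allocate/free cycles per thread            */

static atomic_uint bitmap[NSLOTS];      /* 0 = free, 1 = in use */

/* Allocate a slot by scanning the single shared bitmap: every thread does
 * atomic compare-and-swap on the same leading cache lines. */
static int alloc_shared(void)
{
	for (;;) {
		for (int i = 0; i < NSLOTS; i++) {
			unsigned int expected = 0;

			if (atomic_compare_exchange_weak(&bitmap[i], &expected, 1))
				return i;
		}
	}
}

/* Allocate only from this thread's private sub-range: no cross-CPU
 * cache-line traffic on the hot path. */
static int alloc_partitioned(int tid)
{
	const int span = NSLOTS / NTHREADS;

	for (;;) {
		for (int i = tid * span; i < (tid + 1) * span; i++) {
			unsigned int expected = 0;

			if (atomic_compare_exchange_weak(&bitmap[i], &expected, 1))
				return i;
		}
	}
}

struct arg { int tid; int partitioned; };

static void *worker(void *p)
{
	struct arg *a = p;

	for (int n = 0; n < NOPS; n++) {
		int slot = a->partitioned ? alloc_partitioned(a->tid)
					  : alloc_shared();

		atomic_store(&bitmap[slot], 0);	/* "I/O completed": free the slot */
	}
	return NULL;
}

static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
	pthread_t t[NTHREADS];
	struct arg a[NTHREADS];

	for (int mode = 0; mode < 2; mode++) {
		double start = now_sec();

		for (int i = 0; i < NTHREADS; i++) {
			a[i] = (struct arg){ .tid = i, .partitioned = mode };
			pthread_create(&t[i], NULL, worker, &a[i]);
		}
		for (int i = 0; i < NTHREADS; i++)
			pthread_join(t[i], NULL);
		printf("%-11s bitmap: %.2f s\n",
		       mode ? "per-thread" : "shared", now_sec() - start);
	}
	return 0;
}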