On Tue, 2018-02-06 at 14:12 +0100, Roman Penyaev wrote:
> On Mon, Feb 5, 2018 at 1:16 PM, Sagi Grimberg <sagi@xxxxxxxxxxx> wrote:
> > [ ... ]
> > - srp/scst comparison is really not fair having it in legacy request
> > mode. Can you please repeat it and report a bug to either linux-rdma
> > or to the scst mailing list?
>
> Yep, I can retest with mq.
>
> > - Your latency measurements are surprisingly high for a null target
> > device (even for low end nvme device actually) regardless of the
> > transport implementation.
>
> Hm, network configuration? These are results on machines dedicated
> to our team for testing in one of our datacenters. Nothing special
> in configuration.

Hello Roman,

I agree that the latency numbers are way too high for a null target
device. The last time I measured the latency of the SRP protocol against
an SCST target with a null block driver at the target side and ConnectX-3
adapters, I measured about 14 microseconds. That's almost 100 times less
than the measurement results in
https://www.spinics.net/lists/linux-rdma/msg48799.html.

Something else I would like to understand better is how much of the
latency gap between NVMeOF/SRP and IBNBD can be closed without changing
the wire protocol. Was support for immediate data, for example, present
in the NVMeOF and/or SRP drivers used in your test setup? Are you aware
that the NVMeOF target driver calls page_alloc() from the hot path, but
that there are plans to avoid these calls by using a caching mechanism
similar to the SGV cache in SCST? Are you aware that a significant
latency reduction can be achieved by changing the SCST SGV cache from a
global into a per-CPU cache? Regarding the SRP measurements: have you
tried setting the never_register kernel module parameter to true? I'm
asking because I think that mode is the most similar to how the IBNBD
initiator driver works.

Thanks,

Bart.
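
P.S. To make the per-CPU cache suggestion a bit more concrete, here is a
rough sketch in kernel-style C of the kind of approach I have in mind.
The names (pcpu_page_cache, pcpu_cache_get(), pcpu_cache_put()) and the
cache size are made up for illustration; this is neither the SCST SGV
cache code nor the planned NVMeOF target code, just an example of keeping
a small per-CPU stash of pages so that the hot path only rarely has to
call into the page allocator:

#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/percpu.h>

/* Illustrative sketch only -- not the actual SCST or NVMeOF target code. */
struct pcpu_page_cache {
	struct page	*pages[16];	/* small per-CPU stash of free pages */
	int		count;
};

static DEFINE_PER_CPU(struct pcpu_page_cache, page_cache);

/* Assumes process context; an IRQ-safe variant would need local_irq_save(). */
static struct page *pcpu_cache_get(gfp_t gfp)
{
	struct pcpu_page_cache *c = get_cpu_ptr(&page_cache);
	struct page *page = NULL;

	if (c->count > 0)
		page = c->pages[--c->count];
	put_cpu_ptr(&page_cache);

	/* Only fall back to the page allocator on a cache miss. */
	return page ? page : alloc_page(gfp);
}

static void pcpu_cache_put(struct page *page)
{
	struct pcpu_page_cache *c = get_cpu_ptr(&page_cache);

	if (c->count < ARRAY_SIZE(c->pages)) {
		c->pages[c->count++] = page;
		page = NULL;
	}
	put_cpu_ptr(&page_cache);

	/* Cache full: give the page back to the allocator. */
	if (page)
		__free_page(page);
}

Since get_cpu_ptr()/put_cpu_ptr() only disable preemption, the common case
does not take any lock, which is where most of the win over a global,
lock-protected cache comes from.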