Re: [PATCH 00/24] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)


 



On Tue, Feb 6, 2018 at 5:01 PM, Bart Van Assche <Bart.VanAssche@xxxxxxx> wrote:
> On Tue, 2018-02-06 at 14:12 +0100, Roman Penyaev wrote:
>> On Mon, Feb 5, 2018 at 1:16 PM, Sagi Grimberg <sagi@xxxxxxxxxxx> wrote:
>> > [ ... ]
>> > - srp/scst comparison is really not fair having it in legacy request
>> >   mode. Can you please repeat it and report a bug to either linux-rdma
>> >   or to the scst mailing list?
>>
>> Yep, I can retest with mq.
>>
>> > - Your latency measurements are surprisingly high for a null target
>> >   device (even for low end nvme device actually) regardless of the
>> >   transport implementation.
>>
>> Hm, network configuration?  These are results on machines dedicated
>> to our team for testing in one of our datacenters. Nothing special
>> in configuration.
>

Hello Bart,

> I agree that the latency numbers are way too high for a null target device.
> Last time I measured latency for the SRP protocol against an SCST target
> + null block driver at the target side and ConnectX-3 adapters I measured a
> latency of about 14 microseconds. That's almost 100 times less than the
> measurement results in https://www.spinics.net/lists/linux-rdma/msg48799.html.

Here is the configuration of that setup:

Initiator and target HW configuration:
    AMD Opteron 6386 SE, 64 CPUs, 128 GB RAM
    InfiniBand: Mellanox Technologies MT26428
                [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE]

Also, I remember that there were two IB switches between the initiator and
the target. Unfortunately, I cannot repeat exactly the same configuration,
but I will retest as soon as we get new HW.

> Something else I would like to understand better is how much of the latency
> gap between NVMeOF/SRP and IBNBD can be closed without changing the wire
> protocol. Was e.g. support for immediate data present in the NVMeOF and/or
> SRP drivers used on your test setup?

I did not get the question. IBTRS responds to I/O with empty messages that
have only the imm_data field set; that is part of the IBTRS protocol.  I do
not understand how immediate data can be present in the other drivers if
their protocols do not use it, so I am lost here.

> Are you aware that the NVMeOF target driver calls page_alloc() from the hot path but that there are plans to
> avoid these calls in the hot path by using a caching mechanism similar to
> the SGV cache in SCST? Are you aware that a significant latency reduction
> can be achieved by changing the SCST SGV cache from a global into a per-CPU
> cache?

No, I was not aware of that. It is nice that there is still a lot of room for
performance tweaks. I will definitely retest on a fresh kernel once that work
is done in nvme, scst or ibtrs (especially once we get rid of FMRs and UNSAFE
rkeys). Are there any other parameters that could be tweaked as well?
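For other readers of the thread, the general per-CPU caching idea, as I
understand it, is something like the sketch below. This is a toy illustration
with made-up names (pcpu_page_cache etc.), not the SCST SGV cache and not the
planned NVMeOF change:

#include <linux/mm.h>
#include <linux/list.h>
#include <linux/percpu.h>

struct pcpu_page_cache {
        struct list_head free;  /* cached pages, linked via page->lru  */
        unsigned int     nr;    /* pages currently cached on this CPU  */
};

/* NB: each CPU's 'free' list must be INIT_LIST_HEAD()-ed at init (omitted) */
static DEFINE_PER_CPU(struct pcpu_page_cache, io_page_cache);

static struct page *cache_get_page(gfp_t gfp)
{
        struct pcpu_page_cache *c = get_cpu_ptr(&io_page_cache);
        struct page *page = NULL;

        if (c->nr) {            /* fast path: no allocator, no shared lock */
                page = list_first_entry(&c->free, struct page, lru);
                list_del(&page->lru);
                c->nr--;
        }
        put_cpu_ptr(&io_page_cache);

        return page ? page : alloc_page(gfp);   /* slow path fallback */
}

static void cache_put_page(struct page *page)
{
        struct pcpu_page_cache *c = get_cpu_ptr(&io_page_cache);

        if (c->nr < 64) {       /* arbitrary per-CPU limit */
                list_add(&page->lru, &c->free);
                c->nr++;
                page = NULL;
        }
        put_cpu_ptr(&io_page_cache);

        if (page)
                __free_page(page);
}

The point being that the hot path only touches CPU-local data, so the common
case involves neither the page allocator nor any cross-CPU contention.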

> Regarding the SRP measurements: have you tried to set the
> never_register kernel module parameter to true? I'm asking this because I
> think that mode is most similar to how the IBNBD initiator driver works.

Yes, according to my notes from that link (frankly, I do not remember the
details any more, but this is what I wrote a year ago):

    * Where suffixes mean:

     _noreg - modules on initiator side (ib_srp, nvme_rdma) were loaded
              with 'register_always=N' param

That is what you are asking about, right?
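In case it helps, the loading sequence from those notes boils down to
something like the following (I am reconstructing the exact invocation, so
take it with a grain of salt):

    modprobe ib_srp register_always=N
    modprobe nvme_rdma register_always=N

i.e. the '_noreg' runs simply passed register_always=N to both initiator
drivers.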

--
Roman


