Re: [PATCH 00/24] InfiniBand Transport (IBTRS) and Network Block Device (IBNBD)

Hi Bart,

On Mon, Feb 5, 2018 at 5:58 PM, Bart Van Assche <Bart.VanAssche@xxxxxxx> wrote:
> On Mon, 2018-02-05 at 14:16 +0200, Sagi Grimberg wrote:
>> - Your latency measurements are surprisingly high for a null target
>>    device (even for low end nvme device actually) regardless of the
>>    transport implementation.
>>
>> For example:
>> - QD=1 read latency is 648.95 for ibnbd (I assume usecs right?) which is
>>    fairly high. On nvme-rdma it's 1058 us, which means over 1 millisecond
>>    and even 1.254 ms for srp. Last time I tested nvme-rdma read QD=1
>>    latency I got ~14 us. So something does not add up here. If this is
>>    not some configuration issue, then we have serious bugs to handle..
>>
>> - QD=16 the read latencies are > 10ms for null devices?! I'm having
>>    trouble understanding how you were able to get such high latencies
>>    (> 100 ms for QD>=100)
>>
>> Can you share more information about your setup? It would really help
>> us understand more.
>
> I would also appreciate it if more information could be provided about the
> measurement results. In addition to answering Sagi's questions, would it
> be possible to share the fio job that was used for measuring latency? In
> https://events.static.linuxfound.org/sites/events/files/slides/Copy%20of%20IBNBD-Vault-2017-5.pdf
> I found the following:
>
> iodepth=128
> iodepth_batch_submit=128
>
> If you want to keep the pipeline full I think that you need to set the
> iodepth_batch_submit parameter to a value that is much lower than iodepth.
> I think that setting iodepth_batch_submit equal to iodepth will yield
> suboptimal IOPS results. Jens, please correct me if I got this wrong.

Sorry, Bart, I will answer here in a few words (I would like to reply
in detail to Sagi's mail tomorrow).

Everything (fio jobs, setup, etc.) is given at the same link:

https://www.spinics.net/lists/linux-rdma/msg48799.html

At the bottom you will find links to Google Docs with many pages,
along with archived fio jobs and scripts. (I do not remember exactly,
since a year has passed, but everything should be there.)

Regarding a smaller iodepth_batch_submit: that decreases performance.
I experimented with it at one point and even introduced the new
iodepth_batch_complete_max option for fio, but then decided to stop and
simply chose this configuration, which gives me the fastest results.
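[Editor's note: for readers following the iodepth discussion above, a
minimal fio job exercising these parameters might look like the sketch
below. The device path and all values are illustrative assumptions, not
the exact jobs from the linked archive.]

```ini
; Hypothetical latency/IOPS test job; /dev/nullb0 (null_blk) is an
; assumed target, not taken from the original measurements.
[global]
ioengine=libaio
direct=1
rw=randread
bs=4k
time_based
runtime=60

[qd128]
filename=/dev/nullb0
iodepth=128
; The original jobs reportedly used iodepth_batch_submit=128 (equal to
; iodepth). Bart's suggestion is that a much lower value, e.g. 16, lets
; submissions and completions overlap and may keep the pipeline fuller.
iodepth_batch_submit=16
; Cap on completions reaped per cycle; this is the option Roman mentions
; having introduced to fio.
iodepth_batch_complete_max=16
```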

--
Roman
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
