Re: Question about poll_multi_file

On 6/6/21 3:08 PM, Hao Xu wrote:
> On 6/4/21 2:01 AM, Jens Axboe wrote:
>> On 6/3/21 6:53 AM, Hao Xu wrote:
>>> Hi Jens,
>>> I've a question about poll_multi_file in io_do_iopoll().
>>> It keeps spinning in f_op->iopoll() if poll_multi_file is
>>> true (and we're under the requested amount). But in my
>>> understanding, reqs may be in different hardware queues
>>> for blk-mq device even in this situation.
>>> Should we consider the hardware queue number as well? Some
>>> thing like below:
>>
>> That looks reasonable to me - do you have any performance
>> numbers to go with it?
> 
> Not very easy for me to construct a good case. I'm trying to
> mock the situation below:
> manually control uring reqs to go to 2 hardware queues, like:
>    hw_queue0     hw_queue1
>    heavy_req     simple_req
>    heavy_req     simple_req
>      ...            ...
> 
> heavy_req is some request that needs more time to complete,
> while simple_req takes less time. Then make io_do_iopoll()
> always spin on hw_queue0 first.
> Any ideas?

- NVMe with #HW qs >= #CPUs, so HW to SW qs are 1-to-1.
- 2 threads pinned to different CPUs, so they submit to
different qs.

Then one thread is doing 512B rand reads, and the second
is doing 64-128 KB rand reads. So, I'd expect a latency
spike at some of the nines. Not tested, so just a suggestion.
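
To make it concrete, a rough userspace sketch of that setup with
liburing (untested; DEV, the CPU numbers, block sizes and runtime are
placeholders, and it assumes 512B logical blocks). Each thread pins
itself to a CPU, sets up an IOPOLL ring against an O_DIRECT fd, and
records the worst per-I/O latency it sees:

/*
 * Untested sketch. DEV, CPUs, sizes and runtime are placeholders,
 * and it assumes HW queues map 1:1 to CPUs.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <liburing.h>
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define DEV             "/dev/nvme0n1"  /* placeholder */
#define RUNTIME_SEC     30
#define QD              1

struct worker {
        int cpu;
        size_t min_bs, max_bs;
};

static long long now_ns(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

static void *run(void *arg)
{
        struct worker *w = arg;
        struct io_uring_cqe *cqe;
        struct io_uring_sqe *sqe;
        struct io_uring ring;
        long long start, t0, max_lat = 0;
        off_t dev_size, off;
        cpu_set_t set;
        size_t bs;
        void *buf;
        int fd;

        /* pin to one CPU so submissions hit one HW queue (1:1 mapping) */
        CPU_ZERO(&set);
        CPU_SET(w->cpu, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

        fd = open(DEV, O_RDONLY | O_DIRECT);
        if (fd < 0 || io_uring_queue_init(QD, &ring, IORING_SETUP_IOPOLL))
                return NULL;
        dev_size = lseek(fd, 0, SEEK_END);
        if (posix_memalign(&buf, 4096, w->max_bs))
                return NULL;

        start = now_ns();
        while (now_ns() - start < RUNTIME_SEC * 1000000000LL) {
                t0 = now_ns();
                bs = (w->min_bs + rand() % (w->max_bs - w->min_bs + 1)) & ~511UL;
                off = (rand() % (dev_size / bs)) * bs;

                sqe = io_uring_get_sqe(&ring);
                io_uring_prep_read(sqe, fd, buf, bs, off);
                io_uring_submit(&ring);
                /* for IOPOLL rings this polls for the completion */
                io_uring_wait_cqe(&ring, &cqe);
                io_uring_cqe_seen(&ring, cqe);

                if (now_ns() - t0 > max_lat)
                        max_lat = now_ns() - t0;
        }
        printf("cpu%d bs %zu-%zu: max lat %lld us\n",
               w->cpu, w->min_bs, w->max_bs, max_lat / 1000);
        io_uring_queue_exit(&ring);
        free(buf);
        close(fd);
        return NULL;
}

int main(void)
{
        struct worker small = { .cpu = 0, .min_bs = 512,   .max_bs = 512 };
        struct worker big   = { .cpu = 1, .min_bs = 65536, .max_bs = 131072 };
        pthread_t t1, t2;

        pthread_create(&t1, NULL, run, &small);
        pthread_create(&t2, NULL, run, &big);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
}

QD is 1 on purpose, so each completion's latency is easy to attribute;
the large reads are slow enough that back-to-back submissions keep the
second thread's queue busy most of the time.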

The second can also be doing writes, but that would need
1) waiting for the drive to reach steady state
2) higher QD/load for the writes, because otherwise SSD
caches might hide the waiting.
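
For the write variant, something like the loop below for the heavy
thread (again untested; it reuses now_ns() and the ring/fd setup from
the sketch above, except the ring needs at least WRITE_QD entries and
the fd O_RDWR. It also scribbles over the raw device, so scratch disks
only):

#define WRITE_QD        32      /* arbitrary, just "deep enough" */

static void heavy_writes(struct io_uring *ring, int fd, void *buf,
                         size_t bs, off_t dev_size, long long runtime_ns)
{
        struct io_uring_cqe *cqe;
        struct io_uring_sqe *sqe;
        long long start = now_ns();
        int inflight = 0;

        while (now_ns() - start < runtime_ns) {
                /* top the queue up to WRITE_QD before reaping anything */
                while (inflight < WRITE_QD) {
                        off_t off = (rand() % (dev_size / bs)) * bs;

                        sqe = io_uring_get_sqe(ring);
                        if (!sqe)
                                break;
                        io_uring_prep_write(sqe, fd, buf, bs, off);
                        inflight++;
                }
                io_uring_submit(ring);

                /* reap one completion, then go refill the queue */
                if (!io_uring_wait_cqe(ring, &cqe)) {
                        io_uring_cqe_seen(ring, cqe);
                        inflight--;
                }
        }
}

The deeper queue is just to make sure the writes actually pile up at
the device instead of being absorbed by the write cache.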

-- 
Pavel Begunkov


