On Tue, Nov 17, 2020 at 12:59:46PM +0800, Weiping Zhang wrote:
> On Tue, Nov 17, 2020 at 11:28 AM Ming Lei <ming.lei@xxxxxxxxxx> wrote:
> >
> > On Tue, Nov 17, 2020 at 11:01:49AM +0800, Weiping Zhang wrote:
> > > Hi Jens,
> > >
> > > Ping
> >
> > Hello Weiping,
> >
> > Not sure we have to fix this issue: adding blk_mq_queue_inflight()
> > back to the IO path brings a cost which turns out to be visible, and
> > I did get a soft lockup report on Azure NVMe because of this kind of
> > cost.
> >
> Have you tested v5? This patch is different from v1: v1 gets the
> inflight count for each IO, while v5 has changed to get the inflight
> count once per jiffy.

I meant the issue can be reproduced on kernels before commit
5b18b5a73760 ("block: delete part_round_stats and switch to less
precise counting").

Also, do we really need to fix this issue? I understand that device
utilization becomes inaccurate under very small load, but is it really
worth adding runtime overhead to the fast path to fix that?

>
> As for v5, can we reproduce it on null_blk?

No, I have only seen the report on Azure NVMe.

>
> > BTW, supposing the io accounting issue needs to be fixed, I am just
> > wondering why not simply revert 5b18b5a73760 ("block: delete
> > part_round_stats and switch to less precise counting"); the original
> > way had worked for decades.
> >
> This patch is better than before: it breaks early when it finds
> inflight IO on any CPU; only in the worst case (the IO is running on
> the last CPU) does it iterate over all CPUs.

Please see the following case:

1) one device has 256 hw queues, the system has 256 CPU cores, and each
hw queue's depth is 1k.

2) there isn't any IO load on CPUs 0 ~ 254

3) heavy IO load runs on CPU 255

So with your trick the code still needs to iterate over hw queues 0 to
254, and that load isn't something which can be ignored, especially
since it is just for IO accounting.

Thanks,
Ming
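
For illustration, here is a minimal userspace C sketch of the "break
early" scan discussed above. The names (struct hctx_stub,
queue_has_inflight) are invented for the sketch and model only the
shape of the loop, not the actual blk-mq data structures:

#include <stdbool.h>

/* Stand-in for a per-hw-queue inflight counter; real blk-mq state
 * lives in tag maps, so this stub only models the scan's shape. */
struct hctx_stub {
	unsigned int inflight;
};

/* Walk the hw queues and stop at the first busy one. */
static bool queue_has_inflight(const struct hctx_stub *hctxs,
			       unsigned int nr_hw_queues)
{
	for (unsigned int i = 0; i < nr_hw_queues; i++) {
		if (hctxs[i].inflight)
			return true;	/* early break on first busy queue */
	}
	return false;	/* visited every queue and found nothing */
}

In the worst case Ming describes, the only busy queue is the last of
256, so the scan still touches 255 idle entries before it can return;
the early break only helps when a busy queue happens to sit near the
front.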
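
Similarly, a hedged sketch of the "once per jiffy" gating attributed to
v5, written in portable C11 with stdatomic standing in for the kernel's
cmpxchg(); part_stub, stamp, and io_ticks are invented names assumed to
mirror a jiffies-stamp approach, not the patch's actual code:

#include <stdatomic.h>
#include <stdbool.h>

struct part_stub {
	_Atomic unsigned long stamp;	/* jiffies of the last sample */
	unsigned long io_ticks;		/* accumulated busy time */
};

/* Trivial stand-in; a real version would do the inflight scan. */
static bool queue_busy_now(void)
{
	return true;
}

/* Only the CPU that wins the compare-exchange for a given tick pays
 * for the inflight scan, so the scan runs at most once per jiffy
 * rather than once per IO. */
static void update_io_ticks_sketch(struct part_stub *part,
				   unsigned long now)
{
	unsigned long stamp = atomic_load(&part->stamp);

	if (stamp != now &&
	    atomic_compare_exchange_strong(&part->stamp, &stamp, now)) {
		if (queue_busy_now())
			part->io_ticks += now - stamp;
	}
}

This moves the accounting cost off the per-IO path, which is Weiping's
point; Ming's counter-point is that even a once-per-jiffy scan across
hundreds of idle hw queues is measurable work for a statistic that only
matters at low load.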