Re: [PATCH] blk-mq: Wait for hctx inflight requests on CPU unplug


 




I'd rather not go in that direction because:

1) it is very hard to partition global resources into several parts,
and especially hard to make every part happy.

2) sbitmap is smart/efficient enough for this global allocation

3) no obvious improvement was seen from partitioning the resource, according
to previous experiments done by Kashyap.

I'd like to also do the test.

However I would need to forward-port the patchset, which no longer applies
cleanly (I was referring to this https://lore.kernel.org/linux-block/20180205152035.15016-1-ming.lei@xxxxxxxxxx/).
Any help with that would be appreciated.

The queue type change means the patches no longer apply.

Could you just test the patch against v4.15 and see if there is any
improvement?


I'd rather test against the latest mainline, but if that is too difficult then I can backport the LLDD changes and test against 4.15. It may take a while.

Even if it improves performance on hisi_sas, I still suggest not using
that approach to solve the issue of draining in-flight requests when
all CPUs of one hw queue become offline, since it might hurt
performance on other drivers.



I think we could implement the drain mechanism in the following way:

1) if 'struct blk_mq_hw_ctx' serves as the completion queue, use the
approach in the patch

Maybe the gain of exposing multiple queues+managed interrupts outweighs the
loss in the LLDD of having to generate this unique tag with sbitmap; I know

The unique tag has zero cost, see blk_mq_unique_tag().

But we want a tag that is unique in the range [0, host tag count), which blk_mq_unique_tag() does not provide.


that we never used sbitmap in the LLDD for generating the tag when
testing previously. However I'm still not too hopeful.


2) otherwise:
- introduce one callback, .prep_queue_dead(hctx, down_cpu), in
'struct blk_mq_ops'

This would not be allowed to block, right?

It is allowed to block in a CPU hotplug handler.



- call .prep_queue_dead from blk_mq_hctx_notify_dead()

3) inside .prep_queue_dead():
- the driver checks whether all CPUs mapped to the completion queue are offline
- if so, wait for in-flight requests originating from all CPUs mapped to
this completion queue; this can be implemented as one block layer API

That could work. However I think that someone may ask why the LLDD doesn't
just register for the CPU hotplug event itself (which I would really
rather avoid), instead of being relayed the info from the block layer.

.prep_queue_dead() is run from blk-mq's CPU hotplug handler.

I have also thought of abstracting the completion queue in blk-mq for hpsa,
hisi_sas_v3_hw and mpt3sas, but that cannot cover draining the device's internal
commands, so it looks inevitable that we introduce a driver callback.


On the topic of internal commands, I assume the approach would be to reserve tags via blk_mq_tag_set.reserved_tags (currently not set in scsi_mq_setup_tags()), and the LLDD would use blk_mq_alloc_request(,,BLK_MQ_REQ_RESERVED) to get a tag.

I guess that this may be Hannes' idea also (see "as then the block layer maintains all tags, and is able to figure out if the queue really is quiesced").

Thanks,
John


Thanks,
Ming






