On Mon, Aug 12, 2019 at 05:21:44PM +0100, John Garry wrote:
> On 12/08/2019 14:46, Ming Lei wrote:
> > Hi John,
> >
> > On Mon, Aug 12, 2019 at 09:43:07PM +0800, Ming Lei wrote:
> > > Hi,
> > >
> > > Thomas mentioned:
> > > "
> > > That was the constraint of managed interrupts from the very beginning:
> > >
> > > The driver/subsystem has to quiesce the interrupt line and the associated
> > > queue _before_ it gets shutdown in CPU unplug and not fiddle with it
> > > until it's restarted by the core when the CPU is plugged in again.
> > > "
> > >
> > > But neither drivers nor blk-mq do that before one hctx becomes dead
> > > (all CPUs for the hctx are offline), and even worse, blk-mq still tries
> > > to run the hw queue after the hctx is dead, see blk_mq_hctx_notify_dead().
> > >
> > > This patchset tries to address the issue in two stages:
> > >
> > > 1) add one new cpuhp state of CPUHP_AP_BLK_MQ_ONLINE
> > >
> > > - mark the hctx as internally stopped, and drain all in-flight requests
> > > if the hctx is going to be dead.
> > >
> > > 2) re-submit IO in the state of CPUHP_BLK_MQ_DEAD after the hctx becomes dead
> > >
> > > - steal bios from the request, and resubmit them via generic_make_request(),
> > > so these IOs will be mapped to other live hctxs for dispatch
> > >
> > > Please comment & review, thanks!
> > >
> > > V2:
> > > - patches 4 & 5 in V1 have been merged to the block tree, so remove
> > >   them
> > > - address comments from John Garry and Minwoo
> > >
> > >
> > > Ming Lei (5):
> > >   blk-mq: add new state of BLK_MQ_S_INTERNAL_STOPPED
> > >   blk-mq: add blk-mq flag of BLK_MQ_F_NO_MANAGED_IRQ
> > >   blk-mq: stop to handle IO before hctx's all CPUs become offline
> > >   blk-mq: re-submit IO in case that hctx is dead
> > >   blk-mq: handle requests dispatched from IO scheduler in case that hctx
> > >     is dead
> > >
> > >  block/blk-mq-debugfs.c     |   2 +
> > >  block/blk-mq-tag.c         |   2 +-
> > >  block/blk-mq-tag.h         |   2 +
> > >  block/blk-mq.c             | 143 +++++++++++++++++++++++++++++++++----
> > >  block/blk-mq.h             |   3 +-
> > >  drivers/block/loop.c       |   2 +-
> > >  drivers/md/dm-rq.c         |   2 +-
> > >  include/linux/blk-mq.h     |   5 ++
> > >  include/linux/cpuhotplug.h |   1 +
> > >  9 files changed, 146 insertions(+), 16 deletions(-)
> > >
> > > Cc: Bart Van Assche <bvanassche@xxxxxxx>
> > > Cc: Hannes Reinecke <hare@xxxxxxxx>
> > > Cc: Christoph Hellwig <hch@xxxxxx>
> > > Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> > > Cc: Keith Busch <keith.busch@xxxxxxxxx>
> > > --
> > > 2.20.1
> > >
> >
> > Sorry for forgetting to Cc you.
>
> Already subscribed :)
>
> I don't mean to hijack this thread, but JFYI we're getting around to testing
> https://github.com/ming1/linux/commits/v5.2-rc-host-tags-V2 - unfortunately
> we're still seeing a performance regression. I can't see where it's coming
> from. We're double-checking the test though.

The host-tags patchset is only for the several particular drivers which use
a private reply queue as their completion queue.

This patchset handles the generic blk-mq CPU hotplug issue; the few
particular SCSI drivers (hisi_sas_v3, hpsa, megaraid_sas and mpt3sas)
aren't covered by it so far.

I'd suggest moving on with generic blk-mq devices first, given that blk-mq
is now the only request IO path.
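To make the two stages above more concrete, here is a rough, untested
sketch of how the hotplug callbacks hang together. The names only loosely
follow the actual patches, locking and error handling are omitted, and it
assumes each hctx carries an hlist_node (cpuhp_online below) registered
via cpuhp_state_add_instance():

/*
 * Stage 1: teardown callback of the new CPUHP_AP_BLK_MQ_ONLINE state.
 * It runs while the CPU is still online, i.e. before the managed IRQ
 * is shut down, so in-flight requests can still be completed here.
 */
static int blk_mq_hctx_notify_offline(unsigned int cpu, struct hlist_node *node)
{
        struct blk_mq_hw_ctx *hctx =
                hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_online);
        unsigned int other;

        if (!cpumask_test_cpu(cpu, hctx->cpumask))
                return 0;

        /* nothing to do if another CPU mapped to this hctx stays online */
        for_each_cpu(other, hctx->cpumask)
                if (other != cpu && cpu_online(other))
                        return 0;

        set_bit(BLK_MQ_S_INTERNAL_STOPPED, &hctx->state);
        /* ... wait until all in-flight requests of this hctx complete ... */
        return 0;
}

/*
 * Stage 2: helper for the CPUHP_BLK_MQ_DEAD callback, which runs after
 * all CPUs of the hctx are offline.  Steal the bios from a request that
 * can no longer be dispatched and resubmit them, so they get remapped
 * to a live hctx.
 */
static void blk_mq_resubmit_io(struct request *rq)
{
        struct bio *bio;

        while ((bio = rq->bio) != NULL) {
                rq->bio = bio->bi_next;
                bio->bi_next = NULL;
                generic_make_request(bio);
        }
        blk_mq_free_request(rq);
}

The ordering is the point here: the ONLINE teardown runs on the dying CPU
while it is still marked online, while the CPUHP_BLK_MQ_DEAD callback runs
on a surviving CPU afterwards, which is why draining and resubmission can
be split this way.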
There are at least two choices for us to handle drivers/devices with a
private completion queue:

1) host-tags
- the performance issue shouldn't be hard to solve, given it is the same
  as single tags in theory, with just some corner cases left
- what I dislike about this approach is that the blk-mq-tag code becomes
  a mess

2) private callback
- we could simply define a private callback to drain each completion
  queue in the driver (see [*] below for a rough idea)
- the problem is that the four drivers have to duplicate the same job

Thanks,
Ming
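[*] A rough idea of what the private callback could look like; this is
completely hypothetical, and the ->drain_hw_queue() hook name is made up
just for discussion:

/*
 * Hypothetical per-driver hook in struct blk_mq_ops:
 *
 *      void (*drain_hw_queue)(struct blk_mq_hw_ctx *hctx, unsigned int cpu);
 *
 * blk-mq core would invoke it from the same hotplug teardown path, so
 * each driver drains its own private reply queue before the managed
 * IRQ goes away.
 */
static int blk_mq_drain_private_queue(unsigned int cpu, struct hlist_node *node)
{
        struct blk_mq_hw_ctx *hctx =
                hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_online);
        const struct blk_mq_ops *ops = hctx->queue->tag_set->ops;

        if (ops->drain_hw_queue)
                ops->drain_hw_queue(hctx, cpu);
        return 0;
}

Each of hisi_sas_v3, hpsa, megaraid_sas and mpt3sas would then have to
implement ->drain_hw_queue() against its own reply queues, which is
exactly the duplication mentioned above.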