Hi Daniel,

On Wed, Apr 08, 2020 at 02:40:17PM +0200, Daniel Wagner wrote:
> Hi Ming,
>
> On Tue, Apr 07, 2020 at 05:28:53PM +0800, Ming Lei wrote:
> > Hi,
> >
> > Thomas mentioned:
> > "
> >  That was the constraint of managed interrupts from the very beginning:
> >
> >  The driver/subsystem has to quiesce the interrupt line and the associated
> >  queue _before_ it gets shutdown in CPU unplug and not fiddle with it
> >  until it's restarted by the core when the CPU is plugged in again.
> > "
> >
> > But no driver or blk-mq does that before one hctx becomes inactive (all
> > CPUs mapped to the hctx are offline), and even worse, blk-mq still tries
> > to run the hw queue after the hctx is dead, see blk_mq_hctx_notify_dead().
> >
> > This patchset tries to address the issue in two stages:
> >
> > 1) add one new cpuhp state of CPUHP_AP_BLK_MQ_ONLINE
> >
> > - mark the hctx as internally stopped, and drain all in-flight requests
> >   if the hctx is going to become dead
> >
> > 2) re-submit IO in the state of CPUHP_BLK_MQ_DEAD after the hctx becomes dead
> >
> > - steal bios from the request and resubmit them via generic_make_request();
> >   these IOs will then be mapped to other live hctxs for dispatch
> >
> > Please comment & review, thanks!
>
> FWIW, I've stress tested this series by running the stress-cpu-hotplug
> test with a fio workload in the background. Nothing exploded, it all
> just worked fine.

Thanks for your test!

In particular, this patchset changes flush & passthrough IO handling
during CPU hotplug, so if possible please include those two kinds of
background IO when running the CPU hotplug test.

BTW, I verified the patches by running 'dbench -s 64' plus concurrent
NVMe user IO during CPU hotplug, and it looks like everything works fine.

There is also one known performance drop issue reported by John, which
has been addressed in the following commit:

https://github.com/ming1/linux/commit/1cfbe1b2f7fd7085bc86e09c6443a20e89142975

Thanks,
Ming
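
P.S. For anyone skimming the cover letter above: stage 1) essentially
boils down to registering a new per-hctx cpuhp state and hooking the
stop/drain logic into its teardown callback. Below is a simplified
sketch of the registration only, assuming the callback names
blk_mq_hctx_notify_online()/blk_mq_hctx_notify_offline() and the state
name "block/mq:online"; the actual patches may name and wire things
differently.

#include <linux/cpuhotplug.h>

/* assumed callback prototypes, just for the sketch */
static int blk_mq_hctx_notify_online(unsigned int cpu, struct hlist_node *node);
static int blk_mq_hctx_notify_offline(unsigned int cpu, struct hlist_node *node);

static int __init blk_mq_cpuhp_init_sketch(void)
{
	/*
	 * CPUHP_AP_BLK_MQ_ONLINE is the new state proposed by this
	 * patchset.  Each hctx would later add itself as an instance of
	 * this state, so the offline callback can mark the hctx stopped
	 * and drain in-flight requests before its last CPU goes away.
	 */
	return cpuhp_setup_state_multi(CPUHP_AP_BLK_MQ_ONLINE,
				       "block/mq:online",
				       blk_mq_hctx_notify_online,
				       blk_mq_hctx_notify_offline);
}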
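
Stage 2) is basically "detach the bios from the request stuck on the
dead hctx and feed them back to the block layer". A minimal sketch of
that idea follows; the function name is made up for illustration, and
the real re-submission path in the patches is more involved (e.g. for
flush and passthrough requests).

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/blk-mq.h>

static void blk_mq_resubmit_dead_rq_sketch(struct request *rq)
{
	struct bio_list list;
	struct bio *bio;

	bio_list_init(&list);

	/* take ownership of the bios attached to the dead request */
	blk_steal_bios(&list, rq);

	/* retire the old request that was bound to the dead hctx */
	blk_mq_end_request(rq, BLK_STS_OK);

	/*
	 * Re-enter the block layer; the make_request path will map each
	 * bio to a live hctx for dispatch.
	 */
	while ((bio = bio_list_pop(&list)))
		generic_make_request(bio);
}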