On 08/04/2020 13:40, Daniel Wagner wrote:
Hi Ming,
On Tue, Apr 07, 2020 at 05:28:53PM +0800, Ming Lei wrote:
Hi,
Thomas mentioned:
"
That was the constraint of managed interrupts from the very beginning:
The driver/subsystem has to quiesce the interrupt line and the associated
queue _before_ it gets shutdown in CPU unplug and not fiddle with it
until it's restarted by the core when the CPU is plugged in again.
"
But neither the drivers nor blk-mq do that before an hctx becomes inactive
(all CPUs for the hctx are offline), and even worse, blk-mq still tries
to run the hw queue after the hctx is dead, see blk_mq_hctx_notify_dead().
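For reference, the current DEAD-stage handler looks roughly like this; a simplified paraphrase of blk_mq_hctx_notify_dead() with locking details and bookkeeping trimmed, so don't read it as the literal code:

static int blk_mq_hctx_notify_dead(unsigned int cpu, struct hlist_node *node)
{
	struct blk_mq_hw_ctx *hctx =
		hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_dead);
	struct blk_mq_ctx *ctx = __blk_mq_get_ctx(hctx->queue, cpu);
	LIST_HEAD(tmp);

	/* grab whatever the dead CPU still had queued in its sw queue */
	spin_lock(&ctx->lock);
	list_splice_init(&ctx->rq_lists[hctx->type], &tmp);
	spin_unlock(&ctx->lock);

	if (list_empty(&tmp))
		return 0;

	/*
	 * ...and hand it to the hw queue, then run it, even though the
	 * managed interrupt backing this hctx may already be shut down.
	 */
	spin_lock(&hctx->lock);
	list_splice_tail_init(&tmp, &hctx->dispatch);
	spin_unlock(&hctx->lock);

	blk_mq_run_hw_queue(hctx, true);
	return 0;
}

It is registered for the existing CPUHP_BLK_MQ_DEAD state, so it only runs after the CPU is already gone.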
This patchset tries to address the issue in two stages:
1) add one new cpuhp state of CPUHP_AP_BLK_MQ_ONLINE
- mark the hctx as internally stopped, and drain all in-flight requests
if the hctx is going to be dead.
2) re-submit IO in the CPUHP_BLK_MQ_DEAD state after the hctx becomes dead
- steal the bios from the request and resubmit them via generic_make_request();
these IOs will then be mapped to other live hctxs for dispatch (roughly
sketched below)
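Very roughly, the two stages could look like the sketch below. The helper names blk_mq_last_cpu_in_hctx() and blk_mq_hctx_drain_inflight(), the BLK_MQ_S_INTERNAL_STOPPED flag and the hctx->cpuhp_online node are made up here to illustrate the idea and are not necessarily what the patches use:

#include <linux/blk-mq.h>
#include <linux/blkdev.h>
#include <linux/bio.h>
#include <linux/cpuhotplug.h>

/*
 * Stage 1: teardown callback for the new CPUHP_AP_BLK_MQ_ONLINE state,
 * which still runs while the CPU is online.
 */
static int blk_mq_hctx_notify_offline(unsigned int cpu, struct hlist_node *node)
{
	struct blk_mq_hw_ctx *hctx =
		hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_online);

	/* only act when the last CPU mapped to this hctx is going away */
	if (!blk_mq_last_cpu_in_hctx(cpu, hctx))
		return 0;

	/* stop feeding new requests to this hctx ... */
	set_bit(BLK_MQ_S_INTERNAL_STOPPED, &hctx->state);
	/* ... and wait until everything already allocated/in flight is done */
	blk_mq_hctx_drain_inflight(hctx);
	return 0;
}

/*
 * Stage 2: in the CPUHP_BLK_MQ_DEAD handler, instead of kicking the dead
 * hw queue as blk_mq_hctx_notify_dead() does today, each request left on
 * the inactive hctx is broken back into bios and resubmitted.
 */
static void blk_mq_resubmit_rq(struct request *rq)
{
	struct bio_list list;
	struct bio *bio;

	bio_list_init(&list);
	blk_steal_bios(&list, rq);		/* detach bios from the request */
	blk_mq_end_request(rq, BLK_STS_OK);	/* free the tag; bios live on */

	/* resubmit: the block layer maps each bio to a live hctx this time */
	while ((bio = bio_list_pop(&list)))
		generic_make_request(bio);
}

/*
 * The new state would be wired up once via something like
 *   cpuhp_setup_state_multi(CPUHP_AP_BLK_MQ_ONLINE, "block/mq:online",
 *			     NULL, blk_mq_hctx_notify_offline);
 * with each hctx added through cpuhp_state_add_instance_nocalls().
 */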
Please comment & review, thanks!
FWIW, I've stress tested this series by running stress-cpu-hotplug
with a fio workload in the background. Nothing exploded, it all just
worked fine.
Hi Daniel,
Is stress-cpu-hotplug an LTP test, or is it from Steven Rostedt? I saw some
threads where he mentioned such a script.
Will the fio processes migrate back onto cpus which have been onlined again?
What is the block driver, NVMe?
Thanks,
john