On Thu, Mar 21, 2024 at 11:07:52AM -0600, Jens Axboe wrote:
> On 3/19/24 8:34 PM, Ming Lei wrote:
> > The kernel parameters `isolcpus=` and `nohz_full=` are used to isolate
> > CPUs for specific tasks, and block IO isn't expected to disturb these
> > CPUs: blk-mq kworkers shouldn't be scheduled on isolated CPUs. Also, if
> > a blk-mq kworker is run on an isolated CPU, long block IO latency can
> > result.
> >
> > The kernel workqueue only respects CPU isolation for WQ_UNBOUND; for a
> > bound WQ, the responsibility is on the user because the CPU is passed
> > as a WQ API parameter, as in mod_delayed_work_on(cpu),
> > queue_delayed_work_on(cpu) and queue_work_on(cpu).
> >
> > So avoid running blk-mq kworkers on isolated CPUs by removing isolated
> > CPUs from hctx->cpumask. Meanwhile, use the queue map instead of
> > hctx->cpumask to check whether all CPUs in a hw queue are offline; this
> > avoids any cost in the fast IO code path, and is safe since
> > hctx->cpumask is only used in these two cases.
>
> In general, I think the fix is fine. The only thing that's a bit odd is:

Thanks for the review!

> > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > index 555ada922cf0..187fbfacb397 100644
> > --- a/block/blk-mq.c
> > +++ b/block/blk-mq.c
> > @@ -28,6 +28,7 @@
> >  #include <linux/prefetch.h>
> >  #include <linux/blk-crypto.h>
> >  #include <linux/part_stat.h>
> > +#include <linux/sched/isolation.h>
> >  
> >  #include <trace/events/block.h>
> >  
> > @@ -2179,7 +2180,11 @@ static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
> >  	bool tried = false;
> >  	int next_cpu = hctx->next_cpu;
> >  
> > -	if (hctx->queue->nr_hw_queues == 1)
> > +	/*
> > +	 * Switch to unbound work if all CPUs in this hw queue fall
> > +	 * into isolated CPUs
> > +	 */
> > +	if (hctx->queue->nr_hw_queues == 1 || next_cpu >= nr_cpu_ids)
> >  		return WORK_CPU_UNBOUND;
>
> This relies on find_next_foo() returning >= nr_cpu_ids if the set is
> empty, which is a lower-level implementation detail that someone reading
> this code may not know.

Indeed, it looks more readable to add a helper:

static bool blk_mq_hctx_empty_cpumask(struct blk_mq_hw_ctx *hctx)
{
	return hctx->next_cpu >= nr_cpu_ids;
}

> >  	if (--hctx->next_cpu_batch <= 0) {
> > @@ -3488,14 +3493,30 @@ static bool blk_mq_hctx_has_requests(struct blk_mq_hw_ctx *hctx)
> >  	return data.has_rq;
> >  }
> >  
> > -static inline bool blk_mq_last_cpu_in_hctx(unsigned int cpu,
> > -		struct blk_mq_hw_ctx *hctx)
> > +static bool blk_mq_hctx_has_online_cpu(struct blk_mq_hw_ctx *hctx,
> > +		unsigned int this_cpu)
> >  {
> > -	if (cpumask_first_and(hctx->cpumask, cpu_online_mask) != cpu)
> > -		return false;
> > -	if (cpumask_next_and(cpu, hctx->cpumask, cpu_online_mask) < nr_cpu_ids)
> > -		return false;
> > -	return true;
> > +	enum hctx_type type = hctx->type;
> > +	int cpu;
> > +
> > +	/*
> > +	 * hctx->cpumask has rule out isolated CPUs, but userspace still
>                          ^^
>                          has to
>
> > +	 * might submit IOs on these isolated CPUs, so use queue map to
>                                                        ^^
>                                                        use the queue map

OK, will fix them in V5.

thanks,
Ming
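
For readers unfamiliar with the isolation parameters discussed above: they
are set on the kernel command line, and a typical combination (the CPU
range 2-7 is purely illustrative) would be:

    isolcpus=nohz,domain,2-7 nohz_full=2-7

With such a boot line, CPUs 2-7 are excluded from the scheduler's load
balancing domains and from the periodic tick, but nothing prevents kernel
code from explicitly queueing bound work on them, which is exactly the gap
this patch closes for blk-mq.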
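
The bound-WQ point is visible at blk-mq's own call site: the CPU picked by
blk_mq_hctx_next_cpu() is handed straight to a *_on() workqueue API, so the
workqueue core never gets a chance to honor isolation itself. Roughly, from
__blk_mq_delay_run_hw_queue() (paraphrased for illustration, not part of
this patch):

	/*
	 * The caller, not the workqueue core, decides which CPU runs the
	 * work item, hence hctx->next_cpu must already exclude isolated
	 * CPUs (or be WORK_CPU_UNBOUND).
	 */
	kblockd_mod_delayed_work_on(blk_mq_hctx_next_cpu(hctx),
				    &hctx->run_work, msecs_to_jiffies(msecs));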
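
Putting Ming's proposed helper together with the first quoted hunk, the
check in blk_mq_hctx_next_cpu() would presumably end up reading something
like this (a sketch of the suggestion, not the actual V5 hunk):

	/*
	 * Switch to unbound work if all CPUs in this hw queue fall into
	 * isolated CPUs: with every CPU stripped from hctx->cpumask, the
	 * cpumask iterators leave hctx->next_cpu >= nr_cpu_ids, which the
	 * helper turns into an explicit, named condition.
	 */
	if (hctx->queue->nr_hw_queues == 1 || blk_mq_hctx_empty_cpumask(hctx))
		return WORK_CPU_UNBOUND;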
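
The second quoted hunk breaks off inside the new comment. Based on the
declarations already visible (enum hctx_type type, int cpu), the rest of
blk_mq_hctx_has_online_cpu() plausibly walks the queue map via
blk_mq_map_queue_type(); a sketch of that shape (not the verbatim patch):

static bool blk_mq_hctx_has_online_cpu(struct blk_mq_hw_ctx *hctx,
		unsigned int this_cpu)
{
	enum hctx_type type = hctx->type;
	int cpu;

	/*
	 * Check the queue map rather than hctx->cpumask: any online CPU
	 * other than the one currently going offline that still maps to
	 * this hctx keeps it alive.
	 */
	for_each_online_cpu(cpu) {
		struct blk_mq_hw_ctx *h = blk_mq_map_queue_type(hctx->queue,
				type, cpu);

		if (h != hctx)
			continue;

		/* this hctx has at least one other online CPU */
		if (cpu != this_cpu)
			return true;
	}

	return false;
}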
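
Finally, the hunk that actually removes isolated CPUs from hctx->cpumask is
not quoted in this reply, but the new <linux/sched/isolation.h> include
suggests something along these lines in blk_mq_map_swqueue() (a guess at
the shape, assuming the cpu_is_isolated() helper from that header):

	queue_for_each_hw_ctx(q, hctx, i) {
		int cpu;

		/* ... existing swqueue mapping code ... */

		/*
		 * Rule out isolated CPUs from hctx->cpumask so that the
		 * blk-mq kworker is never scheduled on them.
		 */
		for_each_cpu(cpu, hctx->cpumask) {
			if (cpu_is_isolated(cpu))
				cpumask_clear_cpu(cpu, hctx->cpumask);
		}
	}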