On Tue, Aug 06, 2024 at 10:55:09PM GMT, Ming Lei wrote:
> On Tue, Aug 06, 2024 at 02:06:47PM +0200, Daniel Wagner wrote:
> > When isolcpus=io_queue is enabled all hardware queues should run on the
> > housekeeping CPUs only. Thus ignore the affinity mask provided by the
> > driver. Also we can't use blk_mq_map_queues because it will map all CPUs
> > to the first hctx unless the CPU is the same one the hctx has its
> > affinity set to, e.g. 8 CPUs with an isolcpus=io_queue,2-3,6-7 config.
>
> What is the expected behavior if someone still tries to submit IO on isolated
> CPUs?

If a user thread issues an IO, the IO is handled by the housekeeping
CPU, which will cause some noise on the submitting CPU. As far as I was
told this is acceptable. Our customers really don't want any IO that
doesn't originate from their application ever hitting the isolated
CPUs; the noise caused when their own application issues an IO is fine.

> BTW, I don't see any change in blk_mq_get_ctx()/blk_mq_map_queue() in this
> patchset,

I was trying to figure out what you tried to explain last time with
hangs, but I didn't really understand under which conditions this
problem occurs.

> that means one random hctx(or even NULL) may be used for submitting
> IO from isolated CPUs, then there can be io hang risk during cpu
> hotplug, or kernel panic when submitting bio.

Can you elaborate a bit more? I must be missing something important here.

Anyway, my understanding is that when the last CPU of a hctx goes
offline, the affinity is broken and reassigned to an online housekeeping
CPU. We also ensure that all in-flight IO has finished and that we don't
submit any new IO to a CPU which is going offline.

FWIW, I tried really hard to trigger an IO hang with CPU hotplug.
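
To make it concrete what I mean by mapping all hctxs to housekeeping
CPUs only, here is a rough sketch of the idea. This is not the actual
patch: it uses HK_TYPE_MANAGED_IRQ as a stand-in for whatever
housekeeping type the series ends up defining for io_queue, the
function name is made up, and the hotplug handling discussed above is
left out entirely.

#include <linux/blk-mq.h>
#include <linux/cpumask.h>
#include <linux/sched/isolation.h>

/* Sketch only: spread the housekeeping CPUs round-robin over the
 * hardware queues, ignoring the driver-provided affinity mask. */
static void blk_mq_map_hk_queues_sketch(struct blk_mq_queue_map *qmap)
{
	const struct cpumask *hk_mask =
		housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);
	unsigned int queue = 0;
	unsigned int cpu;

	/* Only housekeeping CPUs get spread across the hctxs. */
	for_each_cpu(cpu, hk_mask) {
		qmap->mq_map[cpu] = qmap->queue_offset + queue;
		queue = (queue + 1) % qmap->nr_queues;
	}

	/*
	 * Isolated CPUs still get a valid entry so a stray submission
	 * does not end up on an unmapped hctx; park them on the first
	 * housekeeping CPU's queue.
	 */
	for_each_cpu_andnot(cpu, cpu_possible_mask, hk_mask)
		qmap->mq_map[cpu] = qmap->mq_map[cpumask_first(hk_mask)];
}

With a mapping like this an isolated CPU never owns a hctx, which is
why any IO it submits necessarily runs on (and disturbs) a housekeeping
CPU, as described above.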