On Wed, Aug 18, 2021 at 10:44:25PM +0800, Ming Lei wrote:
> Hi,
>
> blk_mq_alloc_request_hctx() is used by NVMe fc/rdma/tcp/loop to connect
> io queues, and the sw ctx is chosen as the first online CPU in
> hctx->cpumask. However, all CPUs in hctx->cpumask may be offline.
>
> This usage model isn't well supported by blk-mq, which assumes the
> allocation is always done on an online CPU in hctx->cpumask. This
> assumption is tied to managed irq, which also requires blk-mq to drain
> inflight requests in the hctx when the last CPU in hctx->cpumask goes
> offline.
>
> However, NVMe fc/rdma/tcp/loop don't use managed irq, so we should
> allow them to allocate a request even when the specified hctx is
> inactive (all CPUs in hctx->cpumask are offline). Fix
> blk_mq_alloc_request_hctx() by allowing allocation when all CPUs of
> the hctx are offline.
>
> Wen Xiong has verified V4 in her nvmef test.
>
> V7:
> 	- move blk_mq_hctx_use_managed_irq() into block/blk-mq.c, 3/3

Hello Jens,

NVMe TCP and the other non-PCI transports have become quite popular
recently, and the kernel panic in blk_mq_alloc_request_hctx() has been
annoying people for quite a while. Any chance to pull the three patches
in so we can fix this in 5.15?

Thanks,
Ming
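
[Editor's note: a minimal sketch of the CPU-selection idea described in the
cover letter above, assuming a hypothetical helper blk_mq_hctx_pick_cpu() and
a managed_irq flag standing in for the series' blk_mq_hctx_use_managed_irq();
this is an illustration of the approach, not the actual patch.]

	#include <linux/blk-mq.h>
	#include <linux/cpumask.h>
	#include <linux/errno.h>

	/*
	 * Pick the sw ctx CPU for a request allocated against a specific
	 * hctx.  Today an online CPU in hctx->cpumask is required; the idea
	 * of the series is to fall back to any CPU in the mask when the
	 * hctx does not use managed irq, so NVMe fc/rdma/tcp/loop can still
	 * allocate connect requests while all CPUs of the hctx are offline.
	 */
	static int blk_mq_hctx_pick_cpu(struct blk_mq_hw_ctx *hctx,
					bool managed_irq)
	{
		int cpu = cpumask_first_and(hctx->cpumask, cpu_online_mask);

		/* Normal case: at least one CPU in the mask is online */
		if (cpu < nr_cpu_ids)
			return cpu;

		/* Managed irq: the hctx must stay inactive, fail the alloc */
		if (managed_irq)
			return -EINVAL;

		/* No managed irq: any CPU in the mask is acceptable */
		return cpumask_first(hctx->cpumask);
	}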