On Mon, Jul 10, 2023 at 10:51:43AM -0600, Keith Busch wrote:
> On Mon, Jul 10, 2023 at 05:14:15PM +0800, Ming Lei wrote:
> > On Mon, Jul 10, 2023 at 08:41:09AM +0200, Christoph Hellwig wrote:
> > > On Sat, Jul 08, 2023 at 10:02:59AM +0800, Ming Lei wrote:
> > > > Take blk-mq's knowledge into account for calculating io queues.
> > > >
> > > > Fix wrong queue mapping in case of kdump kernel.
> > > >
> > > > On arm and ppc64, 'maxcpus=1' is passed to kdump command line, see
> > > > `Documentation/admin-guide/kdump/kdump.rst`, so num_possible_cpus()
> > > > still returns all CPUs.
> > >
> > > That's simply broken.  Please fix the arch code to make sure
> > > it does not return a bogus num_possible_cpus value for these
> >
> > That is documented in Documentation/admin-guide/kdump/kdump.rst.
> >
> > On arm and ppc64, 'maxcpus=1' is passed for the kdump kernel, and
> > 'maxcpus=1' simply keeps one CPU core online and the others offline.
> >
> > So Cc our arch (arm & ppc64) & kdump folks wrt. passing 'maxcpus=1'
> > for the kdump kernel.
> >
> > > setups, otherwise you'll have to paper over it in all kind of
> > > drivers.
> >
> > The issue is only triggered for drivers which use managed irq &
> > multiple hw queues.
>
> Is the problem that the managed interrupt sets the effective irq
> affinity to an offline CPU? You mentioned observed timeouts; are you

Yes, the problem is that blk-mq only creates hctx0, so nvme-pci
translates requests into hctx0's nvme_queue. That is actually wrong,
because blk-mq's view of the queue topology isn't the same as nvme's
view.

> seeing the "completion polled" nvme message?

Yes, "completion polled" can be observed. Meantime the warning in
__irq_startup_managed() can be triggered from
nvme_timeout() -> nvme_poll_irqdisable() -> enable_irq().

Thanks,
Ming