On Tue, Jul 06, 2021 at 12:32:27PM +0200, Hannes Reinecke wrote:
> On 7/6/21 9:41 AM, Ming Lei wrote:
> > On Tue, Jul 06, 2021 at 07:37:19AM +0200, Christoph Hellwig wrote:
> > > On Mon, Jul 05, 2021 at 05:55:49PM +0800, Ming Lei wrote:
> > > > The thing is that blk_mq_pci_map_queues() is allowed to be called for
> > > > non-managed irqs. Also some managed irq consumers don't use
> > > > blk_mq_pci_map_queues().
> > > >
> > > > So this way just provides hint about managed irq uses, but we really
> > > > need to get this flag set if driver uses managed irq.
> > >
> > > blk_mq_pci_map_queues is absolutely intended to only be used by
> > > managed irqs.  I wonder if we can enforce that somehow?
> >
> > It may break some scsi drivers.
> >
> > And blk_mq_pci_map_queues() just calls pci_irq_get_affinity() to
> > retrieve the irq's affinity, and the irq can be one non-managed irq,
> > which affinity is set via either irq_set_affinity_hint() from kernel
> > or /proc/irq/.
> >
> But that's static, right? IE blk_mq_pci_map_queues() will be called once
> during module init; so what happens if the user changes the mapping later
> on? How will that be transferred to the driver?

Yeah, the mapping may become stale in that case, but it still works,
since a non-managed irq supports migration: the irq simply moves to
another online CPU, even if that CPU was not in the original mapping.

And there are several SCSI drivers which provide a module parameter to
enable/disable managed irqs, while blk_mq_pci_map_queues() is always
called for mapping queues, whichever mode is selected.

Thanks,
Ming