On 19/07/2021 10:44, Christoph Hellwig wrote:
> On Mon, Jul 19, 2021 at 08:51:22AM +0100, John Garry wrote:
>>> Address this issue by adding an .irq_affinity_managed field to
>>> 'struct device'.
>>>
>>> Suggested-by: Christoph Hellwig <hch@xxxxxx>
>>> Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
>> Did you consider that for PCI devices we effectively have this info already:
>>
>> bool dev_has_managed_msi_irq(struct device *dev)
>> {
>> 	struct msi_desc *desc;
>>
>> 	list_for_each_entry(desc, dev_to_msi_list(dev), list) {

I just noticed for_each_msi_entry(), which is the same list walk (see the
sketch after the quoted function).

>> 		if (desc->affinity && desc->affinity->is_managed)
>> 			return true;
>> 	}
>>
>> 	return false;
>> }
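
For reference, a minimal sketch of the same check written with that helper --
untested, and assuming for_each_msi_entry() keeps its current definition as a
plain dev_to_msi_list() walk:

bool dev_has_managed_msi_irq(struct device *dev)
{
	struct msi_desc *desc;

	/* Walk every MSI descriptor hung off this device */
	for_each_msi_entry(desc, dev) {
		/* Any vector with kernel-managed affinity marks the device */
		if (desc->affinity && desc->affinity->is_managed)
			return true;
	}

	return false;
}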
> Just walking the list seems fine to me given that this is not a
> performance-critical path. But what are the locking implications?
Since it would only be called from sequential setup code, I didn't think
any locking was required. But we would need to consider where that
function should live and whether it should be public.
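
For illustration, making it public might look something like this -- the
header location and config guard are my guesses, not a settled plan:

/* e.g. in include/linux/msi.h (location assumed): */
#ifdef CONFIG_GENERIC_MSI_IRQ
bool dev_has_managed_msi_irq(struct device *dev);
#else
static inline bool dev_has_managed_msi_irq(struct device *dev)
{
	/* Without generic MSI support there are no managed MSI vectors */
	return false;
}
#endif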
> Also does the above imply this won't work for your platform MSI case?
Right. I think that it may be possible to reach into the platform MSI
descriptors to get this info, but I am not sure it's worth it. There is
only one user there, and there is no generic .map_queues function, so we
could set the flag directly. For PCI, the generic helper could derive it:
int blk_mq_pci_map_queues(struct blk_mq_queue_map *qmap, struct pci_dev *pdev,
			  int offset)
{
	...
		for_each_cpu(cpu, mask)
			qmap->mq_map[cpu] = qmap->queue_offset + queue;
	}

+	qmap->use_managed_irq = dev_has_managed_msi_irq(&pdev->dev);
}
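
And the one platform MSI user could just hard-code the flag in its own
.map_queues implementation: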
--- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
@@ -3563,6 +3563,8 @@ static int map_queues_v2_hw(struct Scsi_Host *shost)
qmap->mq_map[cpu] = qmap->queue_offset + queue;
}
+ qmap->use_managed_irq = 1;
+
return 0;
}
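
As a purely speculative aside on reaching into the platform MSI
descriptors: platform MSI hangs its msi_desc entries off the same
per-device list, so the helper above might work unchanged here too,
i.e. instead of hard-coding the flag, something like (untested,
hisi_hba obtained via shost_priv(shost)):

	qmap->use_managed_irq = dev_has_managed_msi_irq(hisi_hba->dev);

But with a single user, hard-coding seems simpler.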