Re: where is the irq effective affinity set from pci_alloc_irq_vectors_affinity()?

On Sun, 2024-08-04 at 16:14 -0400, Olivier Langlois wrote:
> I am trying to understand the result that the nvme driver gets when
> it calls pci_alloc_irq_vectors_affinity() from nvme_setup_irqs()
> (drivers/nvme/host/pci.c)
> 
> $ cat /proc/interrupts | grep nvme
>  63:          9          0          0          0  PCI-MSIX-0000:00:04.0   0-edge      nvme0q0
>  64:          0          0          0     237894  PCI-MSIX-0000:00:04.0   1-edge      nvme0q1
> 
> $ cat /proc/irq/64/smp_affinity_list
> 0-3
> 
> $ cat /proc/irq/64/effective_affinity_list 
> 3
> 
> I think that this happens somewhere below pci_msi_setup_msi_irqs()
> (drivers/pci/msi/irqdomain.c), but I lose track of what is done
> precisely because I am not sure which irq_domain is in use on my
> system.
> 
> I have experimented by playing with the NVMe I/O queue count that is
> passed to pci_msi_setup_msi_irqs() as the max_vectors parameter.
> 
> The effective IRQ affinity that gets set always appears to be the
> last CPU of the affinity mask.
> 
> I would like to have some control over the selected
> effective_affinity, as I am trying to use NOHZ_FULL effectively on
> my system.
> 
> NOTE:
> I am NOT using irqbalance
> 
> thank you,
> 
I have found the relevant code in arch/x86/kernel/apic/vector.c and
kernel/irq/matrix.c.

I think that matrix_find_best_cpu() should consider whether a CPU is
NOHZ_FULL and NOT report it as the best CPU when there are other
options...

I'll try to play with the idea and report back if I get some success
with it...

Greetings,





