Re: blk-mq: improvement CPU hotplug (simplified version) v4

On Wed, May 27, 2020 at 09:31:30PM +0100, John Garry wrote:
>> Thanks for preparing and posting this new patch series. After v3
>> was posted and before v4 was posted I had a closer look at the IRQ core.
>> My conclusions (which may be incorrect) are as follows:
>> * The only function that sets the 'is_managed' member of struct
>>    irq_affinity_desc to 1 is irq_create_affinity_masks().
>> * There are two ways to cause that function to be called: setting the
>>    PCI_IRQ_AFFINITY flag when calling pci_alloc_irq_vectors_affinity(),
>>    optionally also passing an explicit 'affd' argument.
>>    pci_alloc_irq_vectors() is a wrapper that calls
>>    pci_alloc_irq_vectors_affinity() with a NULL 'affd'.
>> * The following drivers pass an affinity descriptor ('affd') argument
>>    when allocating interrupts: virtio_blk, nvme, be2iscsi, csiostor,
>>    hisi_sas, megaraid, mpt3sas, qla2xxx, virtio_scsi.
>> * The following drivers set the PCI_IRQ_AFFINITY flag but do not pass an
>>    affinity descriptor: aacraid, hpsa, lpfc, smartpqi, virtio_pci_common.
>>
>> What is not clear to me is why managed interrupts are shut down when the
>> last CPU in their affinity mask goes offline. Has it been considered to
>> modify the IRQ core such that managed PCIe interrupts are reassigned to
>> another CPU when the last CPU in their affinity mask goes offline?
>
> I think Thomas answered that here already:
> https://lore.kernel.org/lkml/alpine.DEB.2.21.1901291717370.1513@xxxxxxxxxxxxxxxxxxxxxxx/
>
> (vector space exhaustion)

Exactly.
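
To make the two call paths above concrete, here is a minimal sketch of a
hypothetical driver (the function name and vector counts are made up, not
taken from any of the drivers listed). Both calls end up in
irq_create_affinity_masks(), which is what sets is_managed for the
vectors in the managed set:

#include <linux/interrupt.h>
#include <linux/pci.h>

static int example_alloc_vectors(struct pci_dev *pdev)
{
	/*
	 * Keep the first vector out of the managed set, e.g. for an
	 * admin/config queue whose IRQ must survive CPU hotplug.
	 */
	struct irq_affinity affd = {
		.pre_vectors = 1,
	};
	int nvecs;

	/* Path 1: PCI_IRQ_AFFINITY plus an explicit 'affd'. */
	nvecs = pci_alloc_irq_vectors_affinity(pdev, 2, 32,
			PCI_IRQ_MSIX | PCI_IRQ_AFFINITY, &affd);
	if (nvecs < 0)
		return nvecs;

	pci_free_irq_vectors(pdev);

	/*
	 * Path 2: PCI_IRQ_AFFINITY alone. pci_alloc_irq_vectors() passes
	 * affd == NULL, and the PCI core then substitutes a default
	 * struct irq_affinity, so the resulting vectors are still managed.
	 */
	nvecs = pci_alloc_irq_vectors(pdev, 2, 32,
			PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);

	return nvecs;
}

As I read the mail linked above, the reason for shutting the interrupt
down rather than migrating it is vector space exhaustion: with one queue
per CPU, moving the managed vectors of every offlined CPU onto the
surviving CPUs could run them out of vectors, so the IRQ core shuts the
interrupt down and blk-mq has to drain the associated hw queue first,
which is what this series is about.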


