Re: [PATCH] scsi: csiostor: remove automatic irq affinity assignment

On Wed, Aug 01, 2018 at 08:33:23AM +0200, Hannes Reinecke wrote:
> On 07/31/2018 05:07 PM, Varun Prakash wrote:
> > If the number of interrupt vectors is greater than num_online_cpus(),
> > then pci_alloc_irq_vectors_affinity() assigns a cpumask based on
> > num_possible_cpus() to the remaining vectors; because of this, no
> > interrupts are generated for these vectors.
> > 
> > This patch fixes this issue by using pci_alloc_irq_vectors()
> > instead of pci_alloc_irq_vectors_affinity().
> > 
> > Signed-off-by: Varun Prakash <varun@xxxxxxxxxxx>
> > ---
> >  drivers/scsi/csiostor/csio_isr.c | 4 +---
> >  1 file changed, 1 insertion(+), 3 deletions(-)
> > 


> >  
> > -	cnt = pci_alloc_irq_vectors_affinity(hw->pdev, min, cnt,
> > -			PCI_IRQ_MSIX | PCI_IRQ_AFFINITY, &desc);
> > +	cnt = pci_alloc_irq_vectors(hw->pdev, min, cnt, PCI_IRQ_MSIX);
> >  	if (cnt < 0)
> >  		return cnt;
> >  
> > 
> Hmm.
> That patch (and the reasoning leading up to it) really sounds dodgy.
> I'd rather fix the interrupt affinity code to handle this case correctly.

Yes, it is better to fix the affinity assignment code; I posted
this patch to start a discussion of this issue.

I can confirm that if the number of interrupt vectors is greater than
num_online_cpus(), then pci_alloc_irq_vectors_affinity() assigns a
cpumask based on cpu_possible_mask to the remaining vectors.

The following is the logic for creating the affinity masks in
irq_create_affinity_masks():

struct cpumask *
irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
{
       ...
        /* Spread on present CPUs starting from affd->pre_vectors */
        usedvecs = irq_build_affinity_masks(affd, curvec, affvecs,
                                            node_to_cpumask, cpu_present_mask,
                                            nmsk, masks);

        /*
         * Spread on non present CPUs starting from the next vector to be
         * handled. If the spreading of present CPUs already exhausted the
         * vector space, assign the non present CPUs to the already spread
         * out vectors.
         */
        if (usedvecs >= affvecs)
                curvec = affd->pre_vectors;
        else
                curvec = affd->pre_vectors + usedvecs;
        cpumask_andnot(npresmsk, cpu_possible_mask, cpu_present_mask);
        usedvecs += irq_build_affinity_masks(affd, curvec, affvecs,
                                             node_to_cpumask, npresmsk,
                                             nmsk, masks);
	...
}

> Thomas, can you help here?
> 


