Re: [PATCHv2] PCI: vmd: Use affinity to chain child device interrupts

On Tue, Mar 13, 2018 at 05:10:39PM +0000, Lorenzo Pieralisi wrote:
> On Tue, Feb 06, 2018 at 01:22:25PM -0700, Keith Busch wrote:
> > @@ -233,9 +266,11 @@ static int vmd_msi_prepare(struct irq_domain *domain, struct device *dev,
> >  	struct pci_dev *pdev = to_pci_dev(dev);
> >  	struct vmd_dev *vmd = vmd_from_bus(pdev->bus);
> >  
> > -	if (nvec > vmd->msix_count)
> > +	if (nvec > vmd->msix_count) {
> > +		if (vmd->msix_count > 1)
> > +			return vmd->msix_count - 1;
> >  		return vmd->msix_count;
> 
> I am about to apply this patch but I do not understand what this hunk
> is there for; to me vmd_msi_prepare() should just return an error in
> this code path, unless I am getting this wrong.

Hi Lorenzo,

It's not really an error if a driver requests more vectors than can
be allocated. The value returned here ultimately propagates back to
__pci_enable_msix, which returns 0 on success, < 0 on error, and > 0
if the requested count was too high.
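
For reference, the convention looks roughly like this from the
caller's side. This is only a sketch of the retry loop along the
lines of pci_enable_msix_range(); try_enable_msix() is a stand-in
for the real internal call, not an actual kernel function:

/*
 * Sketch of the MSI-X allocation retry convention that
 * vmd_msi_prepare() participates in:
 *    0 -> success, the requested vectors were set up
 *   <0 -> hard error, give up
 *   >0 -> too many vectors requested; the value is a count
 *         the caller may retry with
 */
static int enable_msix_range_sketch(struct pci_dev *pdev,
				    struct msix_entry *entries,
				    int minvec, int nvec)
{
	for (;;) {
		/* try_enable_msix() is hypothetical */
		int rc = try_enable_msix(pdev, entries, nvec);

		if (rc == 0)
			return nvec;	/* success */
		if (rc < 0)
			return rc;	/* hard failure */
		if (rc < minvec)
			return -ENOSPC;	/* below the minimum */
		nvec = rc;	/* retry with the suggested count */
	}
}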

The change above fixes an off-by-one. It is really a bug fix in its
own right, but it was harmless without the affinity awareness this
patch adds; with it, the missing fix starts to negatively affect
performance.
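
Putting the hunk in context, the function with the fix applied looks
roughly like this (reconstructed from the quoted diff; the
surrounding lines are approximate, not the exact file contents):

static int vmd_msi_prepare(struct irq_domain *domain, struct device *dev,
			   int nvec, msi_alloc_info_t *arg)
{
	struct pci_dev *pdev = to_pci_dev(dev);
	struct vmd_dev *vmd = vmd_from_bus(pdev->bus);

	if (nvec > vmd->msix_count) {
		/*
		 * Too many vectors requested: return a positive count
		 * the driver can retry with.  Suggest one less than
		 * the total (the off-by-one fix), but never suggest
		 * zero.
		 */
		if (vmd->msix_count > 1)
			return vmd->msix_count - 1;
		return vmd->msix_count;
	}

	memset(arg, 0, sizeof(*arg));
	return 0;
}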

Thanks,
Keith


